Electroreforming of Biomass for Value-Added Products

Humanity's overreliance on fossil fuels for chemical and energy production has resulted in uncontrollable carbon emissions that have warranted widespread concern regarding global warming. To address this issue, there is a growing body of research on renewable resources such as biomass, of which cellulose is the most abundant type. In particular, the electrochemical reforming of biomass is especially promising, as it allows greater control over valorization processes and requires milder conditions. Driven by renewable electricity, electroreforming of biomass can be green and sustainable. Moreover, green hydrogen generation can be coupled to anodic biomass electroreforming, which has attracted ever-increasing attention. The following review is a summary of recent developments related to electroreforming cellulose and its derivatives (glucose, hydroxymethylfurfural, levulinic acid). The electroreforming of biomass can be achieved on the anode of an electrochemical cell through electrooxidation, as well as on the cathode through electroreduction. Recent advances in the anodic electroreforming of cellulose and cellulose-derived glucose and 5-hydroxymethylfurfural (5-HMF) are first summarized. Then, the key achievements in the cathodic electroreforming of cellulose and cellulose-derived 5-HMF and levulinic acid are discussed. Afterward, the emerging research focusing on coupling hydrogen evolution with anodic biomass reforming for the cogeneration of green hydrogen fuel and value-added chemicals is reviewed. The final section of this review provides our perspective on the challenges and future research directions of biomass electroreforming.

Introduction

Climate change is arguably humanity's greatest challenge today. In 2019, an Intergovernmental Panel on Climate Change (IPCC) special report established several conditions required to restrict global temperature rise to 1.5 °C above pre-industrial levels. By 2030, carbon emissions would need to be halved, and by 2050, the net carbon released into the atmosphere must be zero [1]. Worryingly, according to the IPCC's 6th assessment report [2], Earth is perilously close to breaching the 1.5 °C goal by 2030. Limiting temperature rise will require extensive effort, but inaction could result in a sea level rise of 180 cm by 2100, resulting in damages to the tune of USD 27 trillion per year [3]. With additional temperature rise, considerable increases in heat-related mortality can be expected, with warmer and poorer regions experiencing a disproportionate burden [4]. Consequently, water stress and food shortages could increase in frequency and severity [5,6]. The window for intervening is open but rapidly dwindling [7]. Replacing the burning of fossil fuels as our main energy source will be crucial. The solid and gaseous wastes humankind produces from the consumption of plants (biomass) offer one such renewable alternative, as illustrated in Figure 1.

Figure 1. Schematic of biomass electroreforming. The most abundant lignocellulosic biomass is taken as an example. Electroreforming of the major component of lignocellulosic biomass, cellulose, or its derivatives, could offer value-added chemicals and green hydrogen fuel. Electroreforming can be powered by electricity from the grid or renewables, which makes it a promising method of renewable energy storage and green chemistry for a sustainable future.

Glucose is a simple monosaccharide with molecular formula C6H12O6, and is the main source of energy for most life forms on Earth. Apart from cellulose, glucose can also be stored in other polymeric forms, such as starch in plants or glycogen in animals. Glucose can be electrochemically converted to gluconic, glucaric and levulinic acids, as well as 5-HMF. These products can be further valorized into other valuable compounds through electrochemical oxidation (anodic reaction) or hydrogenation (cathodic reaction). Some common glucose transformations are shown in Figure 2.
Herein, a mini review is presented to summarize the electroreforming of cellulose and its derivatives under different pH conditions using various catalysts. Depending on the targeted final products, the electroreforming of cellulose can be achieved via electrooxidation on the anode or electroreduction on the cathode. We first discuss recent advances in the anodic electroreforming of cellulose and its derivatives, followed by cathodic electroreforming, focusing on the heterogeneous catalysts employed and the products obtained. Then, the emerging research interest in coupling hydrogen evolution (on the cathode) with anodic biomass reforming is discussed. To conclude this mini review, our perspectives on the challenges and opportunities of biomass electroreforming are presented.

Cellulose

Cellulose monomers comprise two anhydroglucose rings (C6H10O5). The rings share an ether (C-O-C) bond between carbon-1 of one glucose ring and carbon-4 of the other, also known as a β-1,4-glycosidic bond. Additionally, intramolecular hydrogen bonding between the hydroxyl group and oxygen on adjacent glucose rings straightens and stabilizes cellulose chains. Intermolecular hydrogen bonding between adjoining cellulose chains also promotes stability and forms the crystalline structure [33]. Cellulose exists in four polymorphic varieties: I, II, III and IV. Cellulose I is the focus here, since it is the natural form found in plant matter and can be used to form the other polymorphs. The earliest study passing electricity through cellulose was reported in 1947.
O'Sullivan investigated passing current through regenerated cellulose with varying salt and moisture contents to understand the conductance properties of cellulose. In 1963, Murphy performed the first electrolysis of cellulose and observed that hydrogen gas was produced at the anode. However, studies of cellulose electrolysis remained relatively scarce and sporadic until the 21st century, when ever-increasing demand for green and sustainable chemistry appeared. In 2010, Sugano et al. performed electrooxidation of cellulose at a polycrystalline gold (Au) electrode at a pH of 14 to understand its mechanism [34]. Cellulose powder was first dissolved in NaOH using the freeze-thaw technique, after which the cellulose structure was no longer crystalline, suggesting the breakage of intra- and intermolecular hydrogen bonding; this was verified by both microscopy images and X-ray diffraction (XRD), as presented in Figure 3a,b, respectively. Cyclic voltammetry (CV) revealed two oxidation peaks for dissolved cellulose but not for undissolved cellulose. To explore the effect of particle size, ball milling was used to generate cellulose particles in two size ranges, 500 and 100 nm, as shown in Figure 3c,d. The smaller particles led to a 13% higher peak current at similar applied potentials, indicating that ball-mill pretreatment was efficient in promoting the dissolution of cellulose. Fourier transform infrared (FTIR) spectra (Figure 3e) suggested the formation of carboxyl groups, confirming the oxidation of cellulose. To produce electricity directly from the oxidation of cellulose nanoparticles, a fuel cell was constructed, attaining a maximum power density of 44 mW/m2.

Figure 3. (a) Microscopy images before (i) and after (ii) dissolution. (b) X-ray diffraction of cellulose before (i) and after (ii) dissolution. (c) Cyclic voltammograms (vs. Ag/AgCl) of cellulose ball milled to an average particle size of 100 nm (i), 500 nm (ii), and without cellulose (iii). (d) Size distributions after ball milling to 100 nm (i) and 500 nm (ii). (e) FTIR scans of cellulose before dissolution (i), after dissolution (ii) and after oxidation (iii). Reprinted with permission from Ref. [34]. Copyright 2010, Electroanalysis.
Later, their team studied the electroreforming mechanism in a similar alkaline medium [35]. CV scans suggested that cellulose electrooxidation is irreversible and diffusion-controlled. FTIR was conducted during CV to probe interactions between cellulose and the Au electrode, suggesting that adsorbed cellulose displaces OH− ions near the electrode surface and that, during oxidation, the strength of intermolecular H bonds decreases while that of intramolecular bonds increases. The authors proposed the reaction pathway illustrated in Figure 4. Firstly, OH− ions adsorb onto the surface of the gold electrode to form OH-Au sites. These active sites then allow cellulose to adhere electrochemically to the Au electrode surface. Cellulose is oxidized and remains adsorbed until a reversed potential is applied. The authors hypothesized that the adsorption and desorption of OH− ions play an important catalytic role in cellulose electrooxidation. Nuclear magnetic resonance (NMR) spectra and scanning electron microscopy (SEM) imaging suggested differences in the structure of cellulose after electroreforming; however, the exact products were not identified.

Figure 4. Proposed reaction pathway of cellulose electrooxidation on Au. Reprinted with permission from Ref. [35]. Copyright 2014, ChemSusChem.

A notable investigation was conducted in 2016, when Xiao and coworkers reported the use of gold nanoparticles (AuNP) for cellulose electroreforming [36]. Motivated by similar works on cellobiose [37,38], the authors explored the use of nitric acid (HNO3)-pretreated carbon aerogel with AuNP as an anode, as shown in Figure 5. Carbon aerogel was fabricated as per a previous report [39], before acid pretreatment. A mixture containing Au precipitates was prepared by reacting HAuCl4 and NaBH4 [40].
Deposition of AuNP onto the carbon aerogel was performed by repeatedly dipping the aerogel into the mixture and drying it. The anode was used to oxidize cellulose in 0.125 M NaOH (pH of 13.1) with insufflated air. The introduction of oxygen was proposed to speed up the formation of active oxygen species and thereby accelerate oxidation. The authors compared three anodes to study the activities of AuNP of different sizes: 50 nm gold particles on graphite, 50 nm gold particles on carbon aerogel, and 10 nm gold particles on carbon aerogel, as shown in Figure 5a-c. AuNP sizes were calculated from XRD data (Figure 5d) and supported by SEM measurements. Cellulose oxidation was controlled at 10 mA/cm2 and the products were evaluated with high-performance liquid chromatography (HPLC), as presented in Figure 5e. When comparing anodes with the same 10 nm gold particles on different supporting substrates, the pretreated carbon aerogel (CA) support resulted in better cellulose conversion than the graphite support. The authors hypothesized that acid pretreatment of CA led to a higher surface area and oxidized surface carbon, favoring the adsorption of cellulose molecules. Comparing anodes consisting of 10 and 50 nm AuNP on the same supporting substrate (CA), the 50 nm NP anode also obtained high cellulose conversion, but selectivity towards gluconate was significantly lower, suggesting that the size of the AuNP influences the selectivity of the oxidation products. Reducing aeration did not change the oxidation products but decreased the rate of reaction. Ten-nanometer gold particles on acid-pretreated carbon aerogel produced gluconic acid at a 67.8% yield with a cellulose conversion of at least 88.9%.

Sugano et al. also attempted to understand the cellulose electroreforming mechanism using AuNPs, again under alkaline (pH of 14) conditions [41]. Carbon paper supports were loaded with AuNPs through the chemical precipitation-deposition method. The electrodes were then calcined at different temperatures (40, 250, and 350 °C). The authors observed that calcination at 40 °C led to the formation of Au+ (52.5%) and Au3+ (30%), while calcination at higher temperatures led to completely metallic AuNPs. Moreover, calcination at 250 °C produced AuNPs with sizes in a narrow band (10-25 nm), whereas a further increased temperature of 350 °C resulted in larger clusters (20-30 nm). CVs were conducted in 1.3 M NaOH against a Ag/AgCl reference, with and without (1 wt%) cellulose present. The anode calcined at 40 °C displayed no electrocatalytic activity, and the anode calcined at 350 °C showed lower electrocatalytic activity than the anode calcined at 250 °C, indicating that small AuNPs (<25 nm) have higher activity.

Meng et al. electrolyzed cellulose powder in 0.5 M sulfuric acid (pH of 0.3), rather than in alkali solution, using lead/lead dioxide (Pb/PbO2) electrodes [42], as depicted in Figure 6a. Under ambient conditions, with a controlled current density of 30 mA/cm2 for 8 h, the average degree of polymerization (DP) decreased from 1100 to 367. Conversely, the sample that was exposed only to acid had a final DP of 840, as displayed in Figure 6b,c. XRD measurements showed a reduced degree of crystallinity, while FTIR showed lower intensities associated with hydrogen bonds and C-O-C bonds, without the formation of new groups. A maximum soluble sugar yield of 2.5% and a highest 5-HMF yield of 1.8% were obtained.
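For a galvanostatic run such as this, the total charge passed fixes an upper bound on how much oxidation chemistry can occur via Faraday's law. The minimal Python sketch below works through that bookkeeping for the conditions quoted above (30 mA/cm2 for 8 h); the 1 cm2 electrode area is an assumed value for illustration only, and the helper functions are generic, not code from Ref. [42].

```python
# Charge bookkeeping for a constant-current (galvanostatic) electrolysis and
# the corresponding upper bound on moles of electrons transferred.
# The 30 mA/cm2 and 8 h follow the conditions quoted above; the 1 cm2
# electrode area is an assumption for illustration only.

F = 96485.0  # Faraday constant, C per mol of electrons

def charge_passed(current_density_mA_cm2: float, area_cm2: float, hours: float) -> float:
    """Total charge (C) passed during a constant-current run."""
    return current_density_mA_cm2 / 1000.0 * area_cm2 * hours * 3600.0

def electron_moles(charge_C: float) -> float:
    """Moles of electrons corresponding to a given charge."""
    return charge_C / F

q = charge_passed(30.0, 1.0, 8.0)
print(f"charge passed: {q:.0f} C")                          # 864 C
print(f"moles of electrons: {electron_moles(q):.4f} mol")    # ~0.0090 mol
```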
Meng and coworkers proposed an electrocatalytic depolymerization process, as presented in Figure 7. Firstly, water molecules lose electrons at the anode to form hydroxyl radicals (•OH). Next, •OH attacks the C-H bond and removes the hydrogen atom from carbon-4 of the glucose unit, leaving a carbon radical in the cellulose chain. This carbon radical is then oxidized to a superoxide radical. Finally, the glycosidic bond is cleaved through the removal of the superoxide radical.

Glucose

Glucose is the most abundant monosaccharide on Earth [43]. Electrolysis of glucose was reported as early as 1866 to produce acetaldehyde; early reports also speculated on the possible formation of alcohol and many other substances [44]. Indeed, studies have revealed a variety of useful products from glucose electroreforming, including 5-HMF, gluconic acid, glucaric acid, gluconolactone and levulinic acid, among others. The synthesis of glucaric acid in particular has been widely acknowledged for its high value. With applications in many industries, glucaric acid has been listed as one of the most value-added chemicals by the US Department of Energy [45-47]. However, conventional methods for glucaric acid production often require intense pressures, temperatures and harsh chemicals [48]. Gluconic acid is an intermediate towards the production of glucaric acid, and by itself has other commercial applications [49,50]. Levulinic acid and 5-HMF are platform compounds for producing other useful chemicals [51-53].

In 2014, Bin et al. optimized the conversion of glucose to glucaric and gluconic acids with nano-manganese dioxide (MnO2)-loaded tubular porous titanium (Ti) electrodes in a flow-through electrolytic cell, as shown in Figure 8a,b [54].
Interestingly, unlike most other studies, pH was concluded to be less significant in this case: increasing the pH from 2 to 10 only changed glucose conversion slightly, from 90% to 93%. Selectivity to gluconic acid (GLA) and glucaric acid (GA) rose from 87% at a pH of 2 to a maximum of 94% at a pH of 7, before decreasing to 78% at a pH of 10. The flow-through cell (Figure 8b) was believed to limit the expansion of the electrochemical diffusion layer and promote glucose access to the anode surface by convection. The effects of current density on glucose conversion and product selectivity were also studied (Figure 8c). With a MnO2 loading of 4.98%, a glucose concentration of 50.5 mM, a temperature of 30 °C, a pH of 7, a residence time of 19 min, and a current density of 4 mA/cm2, 98% glucose conversion was achieved with selectivities towards gluconic acid and glucaric acid of 43% and 55%, respectively. Increasing the current density to 6 mA/cm2 further led to 99% glucose conversion, with gluconic and glucaric acid selectivities of 15% and 84%, respectively.
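Throughout this review, product yields follow from conversion and selectivity as yield = conversion × selectivity. The minimal Python sketch below applies this relation to the Bin et al. figures quoted above; the helper is a generic illustration rather than code from that study, and note that some papers define these quantities on a carbon-mole basis, which can shift the numbers slightly.

```python
# Per-product yields from overall conversion and product selectivities,
# using yield = conversion * selectivity. Numbers are the Bin et al. values
# quoted above; the helper itself is a generic sketch.

def yields_from_selectivity(conversion: float, selectivities: dict) -> dict:
    """Return fractional yields for each product."""
    return {product: conversion * s for product, s in selectivities.items()}

# 4 mA/cm2 case: 98% conversion, selectivities of 43% (gluconic) and 55% (glucaric)
print(yields_from_selectivity(0.98, {"gluconic acid": 0.43, "glucaric acid": 0.55}))
# -> gluconic acid ~0.42, glucaric acid ~0.54

# 6 mA/cm2 case: 99% conversion, selectivities of 15% (gluconic) and 84% (glucaric)
print(yields_from_selectivity(0.99, {"gluconic acid": 0.15, "glucaric acid": 0.84}))
# -> gluconic acid ~0.15, glucaric acid ~0.83
```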
In 2017, Solmi et al. investigated the effects of varying reactant ratios (glucose to NaOH, glucose to metal catalyst, and glucose concentration), temperature and pressure on glucose conversion to glucaric acid [55]. Under optimal conditions using AuNPs on an activated carbon support, the highest yields of glucaric acid, gluconic acid and other by-products attained were 31%, 18% and 40%, respectively. The authors proposed that the optimal glucose concentration was 5% and that the molar ratio of glucose to metal should be 500:1 for obtaining glucaric acid.

Moggia et al. compared the electrooxidative performance of bare copper (Cu), platinum (Pt) and Au at varying potentials for different functional groups [56]. The researchers first compared CVs of 0.04 M glucose added to 0.1 M sodium hydroxide with Cu, Pt and Au electrodes, and concluded that glucose oxidation is strongly dependent on the metal catalyst. Next, gluconic acid, glucuronic acid and glucaric acid were used to identify which functional groups are oxidized by each metal catalyst. Cu was found to selectively oxidize the glucose aldehyde group at 0.8-1.2 V, but not the hydroxymethyl group. Pt oxidized hydroxymethyl groups at lower potentials and aldehyde groups at higher potentials, while Au oxidized hydroxymethyl groups at higher potentials and aldehydes at lower ones. Finally, to minimize competing side reactions (glucose isomerization and sugar degradation [57]), the authors performed glucose electroreforming at 5 °C and pH 13 for all three metal catalysts. Although good selectivity to glucaric acid was observed on Cu at lower potential (38.4% at 0.86 V), the current densities were too low, and increasing the potential resulted in formic acid as the major product. Pt and Au exhibited similar catalytic action: at low potentials (0.55-0.70 V), both exhibited high selectivity to gluconic acid (78.4-86.8%). Increasing the potential to 1.34 V for Pt, and prolonged electrolysis at 0.70 V for Au, produced glucaric acid selectivities of 13.5% and 12.6%, respectively.

Moggia and co-workers then went on to optimize conditions for each of the two oxidation steps at Au anodes: (1) glucose to gluconic acid, and (2) gluconic acid to glucaric acid [58]. For the first step, pH, glucose concentration and temperature were all found to influence conversion. Optimal parameters of pH 11.3, a glucose concentration of 0.04 M, a temperature of 5 °C and a potential of 0.6 V resulted in the highest gluconic acid selectivity of 97.6%, with a glucose conversion of 25% across a 6 h reaction. Increasing conversion by elevating the pH or temperature lowered the selectivity, and increasing the glucose concentration likely hindered mass transfer to the Au active sites, which also reduced selectivity. On the other hand, none of the above parameters were significant for oxidizing gluconic acid to glucaric acid. Instead, the applied potential considerably influenced the product distribution, with a maximum selectivity of 89.5% attained at 1.1 V vs. RHE. Unfortunately, gluconic acid conversion was low (4.6%), and the highest glucaric acid concentration obtained was 1.2 mM. Furthermore, a drastic drop in current density was observed after a few hours, likely due to the adsorption of glucaric acid at Au active sites, as reported previously [59].

Liu et al. used both nanostructured bimetallic nickel-iron oxide (NiFeOx) and nitride (NiFeNx) electrodes for gluconic and glucaric acid production [60].
NiFeOx on nickel foam was used as the anode in 0.5 M glucose and 1 M potassium hydroxide, while NiFeNx on nickel foam was used as the cathode in 1 M KOH. At a constant applied voltage of 1.4 V, a glucose conversion of 21.3% was attained, and the glucaric acid and gluconic acid yields were 11.6% and 4.7%, respectively. The Faradaic efficiency for glucaric and gluconic acids combined was 87%. The current density at 1.4 V decreased from 101.2 to 97.8 mA/cm2 over a 24 h run. The authors' technoeconomic analysis suggested that this method produces glucaric acid at a 54% lower cost compared to conventional production methods.

In 2021, Neha et al. created a platinum-bismuth alloy (Pt9-Bi1) electrocatalyst on a glassy carbon electrode for glucose conversion to gluconic acid, accompanied by methyl-glucoside conversion to methyl-glucuronate [61]. In an electrolyte consisting of 0.1 M NaOH with 0.1 M glucose added, linear sweep voltammetry (LSV) scans showed the onset potential to be less than 0.06 V, with a broad peak of 4.58 mA/cm2 at around 0.6 to 0.8 V. A chronoamperometric measurement at a fixed potential of 0.3 V was conducted with a Pt9-Bi1/C anode and a Pt/C cathode for 6 h in a filter-press cell. After about 90 min, the current halved (~0.020 to 0.010 A) and later plateaued at around 0.005 A; these readings suggest that some poisoning occurred. The product after the 6 h reaction was confirmed by HPLC and NMR to be gluconate with 100% selectivity, at 40% glucose conversion. Poisoning of electrodes is a significant challenge in scaling up glucose electroreforming, and is suspected to be caused by the action of reaction intermediates [62]. In 2005, Tominaga et al. compared glucose electrooxidation between a pure Au plate and AuNPs (2 nm in diameter) on a carbon electrode [59], as shown in Figure 9a. While CV scans suggested a similar voltammetric response, the gold nanoparticle catalyst exhibited significantly smaller decreases in current over time, displaying better resistance to poisoning, as displayed in Figure 9b. The reduction in current density was further mitigated by increasing the pH. Additionally, Tominaga et al. identified that high selectivity towards gluconate can be obtained at a high pH of 13.7, while electroreforming under neutral conditions (pH of 7) produced a mixture of gluconate and oxalate. Similarly, the applied potential is another factor that can influence the products, as seen in Figure 9c. Most later studies have therefore focused on employing nanoparticle electrocatalysts in alkaline media.
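Faradaic efficiency, quoted repeatedly in this section, is the fraction of the total charge passed that ends up stored in a given product. The short sketch below shows the calculation; the electron counts per molecule (2 e− for glucose to gluconic acid, 6 e− for glucose to glucaric acid, assuming complete oxidation of the aldehyde and terminal hydroxymethyl groups) and the worked numbers are illustrative assumptions, not data from the cited studies.

```python
# Faradaic efficiency: FE = n_product * z * F / Q_total, i.e. the fraction of
# the charge passed that is accounted for by the desired product.
# The electron counts and the worked numbers below are illustrative
# assumptions, not values reported in the cited papers.

F = 96485.0  # C per mol of electrons

ELECTRONS_PER_MOLECULE = {
    "gluconic acid": 2,  # glucose aldehyde -> carboxylic acid (assumed)
    "glucaric acid": 6,  # both terminal groups oxidized to -COOH (assumed)
}

def faradaic_efficiency(product: str, product_mol: float, total_charge_C: float) -> float:
    """Fraction of the total charge stored in the named product."""
    z = ELECTRONS_PER_MOLECULE[product]
    return product_mol * z * F / total_charge_C

# Hypothetical example: 0.5 mmol of glucaric acid formed after 400 C of charge.
print(f"{faradaic_efficiency('glucaric acid', 0.5e-3, 400.0):.1%}")  # ~72.4%
```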
5-Hydroxymethylfurfural

5-Hydroxymethylfurfural (5-HMF) was included in a 2010 revision of the US Department of Energy's list of most valuable chemicals due to its versatility in forming a wide range of useful chemicals. In particular, one of its products, 2,5-furandicarboxylic acid (FDCA), has been widely acknowledged for its potential to replace polyethylene terephthalate (PET) in plastic production with comparable mechanical strength and superior cost savings [63-65]. The recent advances in the anodic electroreforming of 5-HMF are discussed in this section.

Weidner et al. investigated the electrooxidation of 5-HMF to FDCA with bimetallic cobalt-metalloid alloys to replace the OER in water splitting [67]. Among cobalt phosphide (CoP), cobalt boride (CoB), cobalt telluride (CoTe), dicobalt silicide (Co2Si) and cobalt arsenide (CoAs), CoB had the highest current density (2.69 mA/cm2 at 1.45 V, a potential at which the OER is negligible) and the lowest onset potential for 1 mA/cm2 (1.39 V), as displayed in Figure 11a.

A flow reactor was then constructed with CoB-doped Ni foam as the positive electrode and Ni foam as the negative electrode, separated by an anion exchange membrane. Figure 12a,b show the smooth surface of the Ni foam, whereas Figure 12c,d reveal the rough appearance of the CoB-doped foam due to agglomeration of CoB after spray-coating deposition. As illustrated in Figure 12e, the current density at an anodic potential of 1.45 V was 55 mA/cm2 with 10 mM 5-HMF added. Furthermore, by maintaining the potential at 1.45 V, complete conversion of 5-HMF was realized. The FDCA yield attained was 94% and the Faradaic efficiency was 98%, as seen in Figure 12f. Notably, the degradation of 5-HMF into humin-type structures, typically observed at high pH, was suppressed. Remarkably, the 55 mA/cm2 recorded at the applied potential of 1.45 V compares favorably with the 1.63 V required to reach the same current density via the OER.
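The practical appeal of numbers like these is the anode potential saved relative to the OER at the same current density, which translates directly into electrical energy saved per mole of hydrogen when such an anode is paired with a hydrogen-evolving cathode. A rough sketch, assuming a two-electron HER and that all other cell losses are identical in both cases:

```python
# Back-of-the-envelope electrical-energy saving from replacing the OER with
# 5-HMF oxidation at the anode, per mole of H2 produced at the cathode.
# Assumes a 2-electron HER and identical ohmic/cathodic losses in both cells;
# the anode potentials (1.63 V for OER, 1.45 V with 5-HMF) are the values
# quoted above for the same current density.

F = 96485.0   # C per mol of electrons
Z_H2 = 2      # electrons per H2 molecule

def energy_per_mol_h2_kJ(anode_potential_V: float) -> float:
    """Electrical energy (kJ) per mole of H2 attributable to the anode potential.
    The (assumed identical) cathodic and ohmic terms cancel when differences are taken."""
    return Z_H2 * F * anode_potential_V / 1000.0

saving = energy_per_mol_h2_kJ(1.63) - energy_per_mol_h2_kJ(1.45)
print(f"energy saved per mol H2: {saving:.1f} kJ")  # ~34.7 kJ
```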
Suspecting that highly alkaline conditions favor the degradation of 5-HMF into insoluble humin products, Nam et al. used an electrolyte with a pH of 13 (0.1 M KOH), rather than a pH of 14, for electroreforming 5-HMF to FDCA [68]. Nanocrystalline Cu foam was used as the electrocatalyst due to Cu's high overpotential requirement for the competing reaction (i.e., oxygen evolution). LSV scans of 5-HMF and its intermediates confirmed that complete conversion of 5-HMF to FDCA can be accomplished without oxygen evolution, as shown in Figure 13a. An anodic potential of 1.62 V with 0.1 mM 5-HMF at a pH of 13 led to 99.9% 5-HMF conversion, a 96.4% FDCA yield, and a 95.3% Faradaic efficiency. The authors also tested a bulk Cu electrode, which maintained high conversion (99.1%) but achieved a lower FDCA yield (80.8%) and Faradaic efficiency (79.9%). Although LSV scans indicated lower activity for bulk Cu, with a potential (at 1 mA/cm2) of 1.58 V compared to 1.45 V for nanoparticle Cu, it was still significantly lower than the 1.8 V required for the OER. From the product analysis, substantial amounts of 5-formyl-2-furancarboxylic acid (FFCA) were found, suggesting that the oxidation of FFCA to FDCA was the rate-limiting step.

In 2019, Taitt et al. went on to compare the performances of various transition metal oxyhydroxides, such as NiOOH, CoOOH and FeOOH, for the oxidation of 5-HMF into FDCA [69]. 5-HMF oxidation was induced at the lowest potential using CoOOH. However, the authors reported that the current density obtained was too low for practical use, and increasing the potential further resulted in OER. On the other hand, FeOOH exhibited no catalytic activity at potentials below the OER. Among those tested in 0.1 M KOH with 5 mM 5-HMF, NiOOH appeared to be the best catalyst, attaining 99.8% conversion, a 96% yield and a 96% Faradaic efficiency at 1.47 V (Figure 13b). Similar results had been obtained previously by Liu and coworkers [66].

Kang et al. then investigated the activities of Co-based oxides as electrocatalysts [70]. Specifically, nickel cobaltite (NiCo2O4) and cobalt (II,III) oxide (Co3O4) were deposited onto Ni foam. It can be observed from Figure 13c that the OER onset potentials for NiCo2O4 and Co3O4 in 1 M KOH were 1.47 and 1.42 V, respectively; however, both dropped to ~1.2 V after adding 5-HMF. The lower Tafel slope of NiCo2O4 indicated more facile promotion of the reaction, which was confirmed through LSV scans using the intermediates (HMFCA and FFCA). At an anodic potential of 1.5 V in 1 M KOH with 5 mM 5-HMF, the conversion and selectivity of 5-HMF to FDCA were 99.6% and 90.8%, respectively. It is noted that a higher portion of Co3+ was reduced to Co2+ in NiCo2O4, which accounted for its superior performance.
Huang et al. introduced oxygen vacancies by doping CoO with selenium (Se) to form a CoO-CoSe2 electrocatalyst with a Co:Se molar ratio of 23:1 [72]. The catalysts were dispersed onto a carbon paper support, and the electrochemical activities were investigated through LSV in 1 M KOH, with and without 10 mM 5-HMF. It can be observed from Figure 13d that the onset potential was 1.5 V without 5-HMF and 1.3 V with 10 mM 5-HMF. Overall, CoO-CoSe2 showed better performance than CoO and CoSe2. In a three-electrode cell, an anodic potential of 1.43 V fully converted 5-HMF to FDCA with a yield of 99% and a Faradaic efficiency of 97.9%. After five cycles, no decrease in yield or conversion was observed. A similar setup with a carbon paper-only electrode (without CoO-CoSe2) was tested as a reference and showed a significantly lower 5-HMF conversion of 58.3%, with trace FDCA (0.5%) and significant humin-type products.

In 2021, Hu et al. prepared tungsten trioxide (WO3) on Ni foam at relatively low temperature (180 °C) using varying amounts of polyethylene glycol (PEG) additive [73]. Without 5-HMF, the onset potential and the potential at 20 mA/cm2 were 1.32 and 1.6 V, respectively, while the addition of 5-HMF reduced them to 1.18 and 1.44 V, respectively. Conducting CVs at varying scan rates revealed that WO3/Ni0.18 (0.18 g of PEG per cm2 of electrode) had the largest electrochemically active surface area. In a three-electrode electrochemical cell with 1 M potassium hydroxide and 5 mM 5-HMF at an anodic potential of 1.57 V, WO3/Ni0.18 resulted in a conversion of 88.6%, an FDCA yield of 81.5% and a Faradaic efficiency of 79.5%.
Wang et al. performed sulfidation of Ni foam at 120 °C under hydrothermal conditions to produce nickel subsulfide (Ni3S2) on Ni foam [74]. Further investigations revealed the presence of Ni2+ and Ni0 species. In 1 M KOH, water oxidation occurred at 1.63 V at 10 mA/cm2. Upon the addition of 10 mM 5-HMF, the required potential fell to 1.43 V, as displayed in Figure 13e. In a three-electrode configuration with a graphite counter electrode, maintained at 1.498 V, a conversion of 100%, an FDCA yield of 98.3% and a Faradaic efficiency of 93.5% were obtained. A control experiment with bare Ni foam revealed a low FDCA yield (52%), which supports the enhanced electrocatalytic properties of Ni3S2.

In contrast to conventional alkaline media, Kubota and Choi investigated the oxidation of 5-HMF to FDCA at a pH of 1 [75]. The authors aimed to induce FDCA precipitation at low pH (<2-3), allowing for easier extraction. Using a manganese oxide (MnOx) anode, LSVs were performed in 0.05 M sulfuric acid (H2SO4), with and without 20 mM 5-HMF and its intermediates, as shown in Figure 13f. This confirmed that the oxidation of 5-HMF was favored over the OER; notably, 1.49 V was needed to reach 1 mA/cm2 for 5-HMF oxidation. As a control, Pt was scanned under the same conditions and was found to exhibit insignificant electrocatalytic activity. In a three-electrode cell setup, an anodic potential of 1.6 V and a temperature of 60 °C were maintained. The elevated temperature improved the kinetics and the solubility of FDCA, which limited precipitation of FDCA on the anode. After the reaction, the temperature was lowered to precipitate FDCA, as well as the intermediate FFCA. Almost all of the 5-HMF was converted (up to 99.9%), with an FDCA yield of 53.8% and a Faradaic efficiency of 33.8%. Apart from FDCA intermediates, maleic acid was also measured, with a yield of 21.9%.

Other Biomass Derivatives

Minor research efforts have been devoted to other cellulose-derived biomass such as levulinic acid (C5H8O3). In 2015, Dos Santos et al. performed the oxidation of levulinic acid to 2,7-octanedione with a Pt anode in aqueous solution or methanol at a pH of 5.5 [76]. Methanol resulted in a much higher Faradaic efficiency (86% vs. 5%) and higher selectivity (47% vs. 27%) than the water-based electrolyte, while using water allowed for slightly higher conversions (74% vs. 60%). 4-Hydroxy-2-butanone was also produced directly, using Pt at 6 V in 0.2 M NaOH, resulting in 6% selectivity and 5% Faradaic efficiency; the major oxidation product (~45% selectivity) was identified as 3-buten-2-one. Further electrochemical hydrogenation and oxidation of the products were also investigated. For instance, 1,3-butanediol and 1-butanol were obtained from 4-hydroxy-2-butanone, and octane from valeric acid, as mentioned in [77]. Cathodic electroreforming of levulinic acid will be further discussed in later sections.

Glycerol (C3H8O3) is a by-product triol obtained through the production of biofuel [78]. The growing popularity of biofuel synthesis in the last decade has resulted in a larger supply of glycerol, spurring extensive research into glycerol electroreforming, which is summarized in the reviews [79-81]. Most research employs precious metal electrocatalysts for glycerol electroreforming to valuable chemicals. However, in 2019, Liu et al. successfully used CuO to synthesize 1,3-dihydroxyacetone (DHA), a commercially valuable chemical with applications in the cosmetics and polymer industries [82,83].
At the applied current density (3 mA/cm2) and a pH of 9, a selectivity of 60% to DHA was achieved. In 2021, Vo et al. utilized CoOx catalysts for anodic glycerol valorization at a pH of 9 [84]. Operando characterization was performed to track the surface species of the electrocatalysts, and oxidation pathways were proposed, as shown in Figure 14. CoOx was first electrochemically oxidized into oxyhydroxides to form active sites. Then, glycerol underwent indirect electron transfer to be incompletely oxidized to DHA or glyceraldehyde, or completely oxidized to formic acid. Spectroscopy results indicated that incomplete oxidation was more likely to occur, although formic acid was present at all applied potentials. At an applied potential of 1.5 V, a DHA selectivity of 60% was attained.

Sorbitol (C6H14O6) is a biomass-derived polyol identified as a promising platform chemical [46]. While most electrochemical studies on sorbitol have focused on anodic oxidation in fuel cells, in 2019, Kwon et al. electrochemically oxidized sorbitol to glucose, gulose, fructose and sorbose on an Sb-modified Pt anode in a Bi-saturated pH 3 solution [85]. Although high selectivity to a single product was not obtained, the selectivity to value-added products was shown to be influenced by the applied potential. The researchers suggested that further studies might uncover a new pathway to electrochemically convert glucose to fructose through sorbitol.

Furfural (C5H4O2) is one of the oldest biomass-derived chemicals and can undergo electrochemical hydrogenation to form 5-HMF [86,87]. Recent works have also demonstrated the anodic electroreforming of furfural to maleic acid, which is widely used in the synthesis of resins and pharmaceuticals. In 2018, Kubota and Choi used PbO2 electrodes in acidic media with a pH of 1 and found that the onset potential decreased from 1.85 to 1.6 V upon the addition of 10 mM furfural [88]. Furfural was first oxidized to 2-furanol and finally to maleic acid with a yield of 65.1%. In 2020, Roman et al. also reported the anodic reforming of furfural to furoic acid, a chemical used to produce 2,5-furandicarboxylic acid [89]. A Faradaic efficiency of 96% was attained for furoic acid on Au electrodes at 0.8 V and a pH of 0.6, although the low current densities (<30 µA/cm2) suggest the rate of reaction might be slower than desired.
Notably, through density functional theory and attenuated total reflectance surface-enhanced infrared absorption spectroscopy (ATR-SEIRAS) studies, the desorption of furoate from the electrode surface was suggested as the rate-limiting step at Au and Pt electrodes. This hypothesis was validated using further CV characterizations of furfural oxidation in the presence and absence of furoic acid: small quantities (1%) of furoic acid were sufficient to inhibit furfural oxidation under acidic conditions. Studies on the electroreforming of cellulose and its derivatives, as well as other biomass derivatives, at the anode are summarized in Table 1 together with the key technical information.

Electroreforming of Biomass at the Cathode

In addition to oxidation at the anode, hydrogenation or reduction can also be conducted at the cathode of electrolytic cells. In hydrogenation, H+ ions in the solution are reduced to surface-bound atomic hydrogen. Oxygenated organic molecules can react with the adsorbed hydrogen to form valuable products. This section details some case studies of electroreforming biomass-derived compounds via cathodic reactions.

Cellulose

In 2014, Yang et al. investigated the electroreduction of cellulose oligosaccharides into glucose in acidic media [90]. Short-chain oligosaccharides were first produced via hydrothermal treatment with an acidic catalyst. The authors varied the pH, applied voltage, electrolysis duration, and electrode preparation to optimize the glucose yield. With a 5% MnO2/graphite/polytetrafluoroethylene (PTFE) cathode calcined at 500 °C for 3 h, a glucose yield of 72.4% with 100% selectivity was reported under the optimal electrolysis conditions (pH of 3, 8 h reaction duration, and potential of −0.58 V vs. RHE). Figure 15a,b depict the product analysis by HPLC before and after electroreforming. Similar to the electrooxidation mechanism on a gold electrode proposed by Sugano's group [35], Yang et al. hypothesized that cellulose would first adsorb onto the surface of the MnO2/graphite/PTFE cathode, as shown in Figure 15c. Mn(IV) (in MnO2) would be reduced to Mn(III) (in MnOOH) after reaction with a H+ ion and an electron. Afterwards, MnOOH coordinates with the oxygen in the glycosidic bond, depolymerizing the cellulose chain. Lastly, MnOOH is re-oxidized into MnO2 in the acidic medium.
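Potentials in this review are quoted against different references (Ag/AgCl in some of the cellulose studies above, RHE here), and comparing them requires the Nernstian pH correction. A small sketch, assuming a saturated-KCl Ag/AgCl electrode (+0.197 V vs. SHE; other fill solutions shift this offset) and 25 °C:

```python
# Converting electrode potentials between the pH-dependent RHE scale and the
# pH-independent Ag/AgCl scale:
#   E(RHE) = E(Ag/AgCl) + E0(Ag/AgCl vs SHE) + 0.0592 * pH   (at 25 degC)
# The +0.197 V offset assumes a saturated-KCl Ag/AgCl reference electrode.

E0_AGCL_SAT = 0.197    # V vs SHE (assumed saturated-KCl filling)
NERNST_SLOPE = 0.0592  # V per pH unit at 25 degC

def agcl_to_rhe(e_agcl_V: float, pH: float) -> float:
    """Convert a potential measured vs. Ag/AgCl to the RHE scale."""
    return e_agcl_V + E0_AGCL_SAT + NERNST_SLOPE * pH

def rhe_to_agcl(e_rhe_V: float, pH: float) -> float:
    """Convert a potential quoted vs. RHE back to the Ag/AgCl scale."""
    return e_rhe_V - E0_AGCL_SAT - NERNST_SLOPE * pH

# Example: the -0.58 V vs. RHE used by Yang et al. at pH 3 corresponds to
print(f"{rhe_to_agcl(-0.58, 3.0):.3f} V vs. Ag/AgCl")  # about -0.955 V
```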
5-Hydroxymethylfurfural

5-HMF can undergo hydrogenation to form valued products, as shown in Figure 16. Products such as 2,5-dimethylfuran (DMF) are considered potential gasoline alternatives due to their high carbon and energy density [66,91-93]. 2,5-Dihydroxymethylfuran (DHMF) is a platform chemical for polyester and polyurethane foam production [94]. Conventional hydrogenation of 5-HMF often requires high temperatures and/or pressures, as well as a hydrogen atmosphere [95]. Electrochemical hydrogenation represents an attractive alternative to produce these chemicals without requiring harsh conditions.

In 2013, Nilges and Schroder first demonstrated the electrochemical hydrogenation of 5-HMF to DMF [87]. At a constant 10 mA/cm2 in a 0.5 M sulfuric acid electrolyte with Cu electrodes, the highest DMF selectivity of 35.6% was attained. As the reaction proceeded, a significant decrease in Faradaic efficiency was observed alongside the decrease in 5-HMF concentration in the solution. The authors proposed that a flow reactor would allow the sparingly soluble DMF product to be continuously removed, thereby maintaining high Faradaic efficiencies for 5-HMF electroreforming.
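The drop in Faradaic efficiency as the substrate depletes under constant current can be rationalized with a simple batch picture: if the partial current for 5-HMF hydrogenation scales with its concentration while the applied current stays fixed, the balance increasingly goes to hydrogen evolution. The toy model below is purely illustrative; the rate constant, cell volume, current and the assumed 6-electron 5-HMF-to-DMF stoichiometry are hypothetical choices, not the kinetics reported by Nilges and Schroder.

```python
# Toy batch model of Faradaic-efficiency loss during constant-current
# hydrogenation: the partial current for 5-HMF reduction is taken as
# first-order in concentration (capped by the applied current); the remainder
# is assumed to evolve hydrogen. All parameter values are hypothetical.

F = 96485.0     # C per mol of electrons
N_E = 6         # electrons per 5-HMF -> DMF (assumed stoichiometry)
I_TOTAL = 0.10  # A, constant applied current (hypothetical)
K = 2.0e-3      # A per mM, assumed first-order partial-current constant
VOLUME_L = 0.05 # L, hypothetical electrolyte volume
DT = 60.0       # s, integration step

c = 20.0        # mM, hypothetical initial 5-HMF concentration
for step in range(241):                       # simulate 4 h in 1 min steps
    t_min = step * DT / 60.0
    i_hmf = min(K * c, I_TOTAL)               # partial current to hydrogenation
    if t_min in (0.0, 60.0, 120.0, 240.0):
        print(f"t = {t_min:3.0f} min  [5-HMF] = {c:5.2f} mM  FE = {i_hmf / I_TOTAL:.1%}")
    converted_mol = i_hmf * DT / (N_E * F)    # 5-HMF consumed in this step
    c = max(c - converted_mol / VOLUME_L * 1000.0, 0.0)
```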
Kwon et al. explored the electrocatalytic hydrogenation of 5-HMF using different metal catalysts in neutral media (0.1 M sodium sulfate buffer, pH ~7.2) in the presence and absence of glucose [91]. Based on the similar onset potentials (~−0.5 V) observed for all metals, they found that the rate of electrocatalytic hydrogenation is not strongly influenced by the catalyst. However, the choice of catalyst was found to influence the hydrogenation pathway and products. Broadly, the metal catalysts were classified into three groups depending on the products obtained in neutral media. Firstly, Fe, Ni, silver (Ag), zinc (Zn), cadmium (Cd), and indium (In) formed DHMF as the major product. Secondly, hydrogenolysis products of 5-HMF were mainly formed on Co, Au, Cu, tin (Sn), and antimony (Sb). Finally, palladium (Pd), Al, bismuth (Bi), and Pb could form either DHMF or other hydrogenolysis products depending on the applied potential. Upon the addition of glucose, Zn, Cd, In, Fe, Ni, Ag, Co, and Au electrocatalysts favored the hydrogenation of 5-HMF into DHMF, while no such effect of the initial glucose concentration on the final product preference was observed using Bi, Pb, Sn or Sb. Similarly, different metal electrocatalysts were tested in acidic media (0.5 M H2SO4, pH of 0.3) [92] and classified into three groups according to the major products. In acidic media, Fe, Ni, Cu, and Pb formed DHMF as the major product; 2,5-dimethyl-2,3-dihydrofuran (DMDHF) was the major product using Pd, Pt, Al, Zn, In, and Sb; and Co, Ag, Au, Cd, Sb, and Bi formed either DHMF or DMDHF as the major product depending on the applied potential.

In 2019, Chadderdon et al. performed the paired electrocatalytic hydrogenation of 5-HMF at the cathode to produce 2,5-bis(hydroxymethyl)furan (BHMF) [96]. The hydrogenation reaction was catalyzed by Ag nanoparticles on a carbon support in a 0.5 M borate buffer solution of pH 9.2. Electrooxidation of 5-HMF at the anode was performed with a homogeneous 4-acetamido-TEMPO catalyst. Pairing these anodic and cathodic electroreforming reactions achieved the co-generation of valuable products. The 5-HMF hydrogenation products were found to be dependent on the cathodic potential and the 5-HMF concentration, as shown in Figure 17a,b. Increasing the potential or the 5-HMF concentration shifted selectivity towards 5,5-bis(hydroxymethyl)hydrofuroin (BHH) rather than BHMF. At the optimum conditions of −0.46 V (vs. RHE) and a 5-HMF concentration of 5 mM, the 5-HMF conversion reached 19.7%, while the selectivity and Faradaic efficiency of BHMF production were highest, at 80.9% and 89.3%, respectively.
Zhang et al. investigated acidic media for the electrocatalytic hydrogenation of 5-HMF into DMF [97]. Specifically, a bimetallic CuNi electrode (composed of 82% Cu, 17% Ni and trace O2) was synthesized. After that, LSV scans in 0.2 M sulfate buffer (pH of 2) were performed. The scans showed a considerable decrease in the magnitude of the potential required for 5 mA/cm2 when 2 g/L (15.9 mM) 5-HMF was added (−0.6 V vs. −0.36 V), suggesting good electrocatalytic activity for 5-HMF reduction. When a cathodic potential of −0.46 V (vs. RHE) was maintained for 70 s, a maximum selectivity to DMF of 91.1% and a Faradaic efficiency of 84.6% were obtained. Under these conditions, however, the conversion of 5-HMF was low, at 37.8%.

In 2021, Liu et al. conducted electrocatalytic hydrogenation of 5-HMF with Ag foil and oxide-derived Ag (OD-Ag) electrodes [98]. These reactions occurred in 0.5 M borate buffer at pH of 9.2 with 20 mM of 5-HMF. Figure 18a,b shows the electrolysis setups, an H-cell and a flow cell, respectively. At a cathodic potential of −0.51 V (vs. RHE), 5-HMF conversion and selectivity to 2,5-bis(hydroxymethyl)furan (BHMF) were higher with the OD-Ag electrocatalyst than with Ag foil, and in the flow cell rather than the H-configuration cell. Consequently, this led to the highest 5-HMF conversion of almost 30% and a selectivity to BHMF of 95.3% (Figure 18c).
Likewise, using OD-Ag in a flow cell, cathodic hydrogenation of 5-HMF into BHMF was coupled with TEMPO-mediated electrooxidation of 5-HMF into FDCA at a platinum anode. Overall, the cell energy efficiency of the flow cell was higher than that of the H-cell, i.e., 24.5% compared to 5.7%. The resultant selectivity to BHMF was consistently around 90%, accompanied by complete selectivity to FDCA.

Levulinic Acid

Combustion of biofuels could result in zero net carbon release into the atmosphere, representing a greener mode of energy production [99,100]. For instance, one example of a biofuel is octane, which can be produced from levulinic acid. The traditional reforming of levulinic acid calls for elevated temperatures and pressures (250-400 °C and 10-35 bar), which require significant energy resources to sustain [101-103]. Electroreforming levulinic acid might be an attractive alternative for synthesising hydrocarbons for energy generation. This is usually performed at the cathode (reduction) under acidic media and consists of two steps: the Kolbe reaction and electrocatalytic hydrogenation (ECH), as shown in Figure 19. Several such studies will be explored in this section.
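To get a rough feel for the charge demand of the ECH step, the sketch below estimates the charge and electrolysis time needed to convert a small batch of levulinic acid. The electron counts commonly assumed for these reductions (4 e− per molecule to valeric acid, 2 e− per molecule to γ-valerolactone), as well as the batch volume, concentration, current and Faradaic efficiency, are illustrative assumptions rather than values from the studies reviewed below.

F = 96485.0  # C per mole of electrons

def ech_charge_and_time(conc_M, volume_L, electrons_per_molecule, current_A, fe=1.0):
    # charge (C) and time (h) needed to convert the whole batch at the given current
    n_substrate = conc_M * volume_L
    charge_C = n_substrate * electrons_per_molecule * F / fe
    return charge_C, charge_C / current_A / 3600.0

# Assumed batch: 50 mL of 0.2 M levulinic acid at 100 mA total current and 80% FE
q_va, t_va = ech_charge_and_time(0.2, 0.05, 4, 0.1, fe=0.8)    # 4 e- to valeric acid
q_gvl, t_gvl = ech_charge_and_time(0.2, 0.05, 2, 0.1, fe=0.8)  # 2 e- to gamma-valerolactone
print(f"valeric acid: {q_va:.0f} C, {t_va:.1f} h; gamma-valerolactone: {q_gvl:.0f} C, {t_gvl:.1f} h")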
In 2012, Nilges et al. first performed ECH of levulinic acid to valeric acid with a lead cathode [77]. Valeric acid was then converted to octane via the Kolbe reaction at a Pt anode. Initially, ECH was performed in 0.5 M sulfuric acid (pH of 1) with 0.1 M levulinic acid at a fixed −1.405 V vs. RHE, with a current density of 20-40 mA/cm2. With a Pb electrode, a Faradaic efficiency of 27% and a selectivity to valeric acid of 97.2% were achieved. Subsequently, for the Kolbe step, water and methanol as solvents were compared. Overall, water resulted in better activity, with 40-50 mA/cm2 at 3.895 V, achieving a selectivity of 51.6% and a Faradaic efficiency of 66.5%, in addition to easier extraction of the water-insoluble products; at 1 M valeric acid and a pH of 5.5, 72% octane selectivity was achieved.

In 2013, Xin et al. studied the electroreforming of levulinic acid to valeric acid and γ-valerolactone [104]. Like valeric acid, γ-valerolactone is an essential precursor to biofuel [105] or can be blended into gasoline directly [106]. The authors compared CVs of Cu and Pb electrodes at a pH of 0, and found that the onset potential of Cu was of lower magnitude than that of Pb (−0.4 V vs. −1.1 V). However, upon adding 0.2 M levulinic acid, the onset potential of Pb increased by 0.2 V while Cu displayed little change, suggesting that adsorption of levulinic acid on Cu was suppressed by the fast hydrogen evolution reaction (HER). At low overpotentials (−1.1 V) with a Pb electrode, a conversion of 1.2% and selectivities towards valeric acid and γ-valerolactone of 81.5% and 18.5%, respectively, were attained. In contrast, higher overpotentials (−1.5 V) led to a conversion of 20.3% with a selectivity of 97% to valeric acid. Additionally, the effects of pH were studied by contrasting CVs in 0.5 M sulfuric acid (pH of 0) and phosphate buffer (pH of 7.5). The acidic medium resulted in 94% selectivity to valeric acid with 12.7% conversion and 84% Faradaic efficiency. The opposite behavior was observed in the neutral medium, with 100% selectivity to γ-valerolactone, although conversion and Faradaic efficiency were low (1.3% and 6.2%, respectively). Xin and coworkers then constructed a flow cell for continual electrolysis with the applied potential fixed at −1.3 V (Figure 20a). When operating in the flow cell reactor, higher efficiencies and conversion were recorded, as shown in Figure 20b,c. This was likely due to the high flow rate, which addressed mass transport issues. Stability tests were conducted across a 20 h reaction with 0.2 M levulinic acid. Conversion rates were consistent and no Pb leaching was observed, although the efficiency of the Faradaic processes decreased over time.
In 2015, Dos Santos et al. performed numerous electroreductions of levulinic acid with various electrode materials (C, Cu, Fe, Ni, Pb) and pH values (acidic, alkaline or neutral), and compared their products and yields [76]. All ECH was performed at a cathodic potential of −1.8 V. At a pH of 0, Pb gave the best yield of valeric acid, with over 70% conversion and 80% selectivity to valeric acid. This is likely due to Pb's high overpotential for hydrogen evolution, which is the competing reaction at the cathode. On the other hand, using carbon in acidic conditions, or Fe in alkaline conditions (pH of 14), resulted in γ-valerolactone as the major product, with 40% conversion and selectivities around 70%.
In 2020, Du et al. compared different electrocatalyst materials (Pt, Pb, Zn, Ti, Co and Cu) for ECH of levulinic acid to valeric acid [107]. Overall, Pb showed the best balance of conversion, selectivity, and Faradaic efficiency. Generally, decreasing the applied potential led to higher conversion and selectivity but lower Faradaic efficiency. While reducing the pH led to higher conversion of levulinic acid, it also caused the Faradaic efficiency to decline: as more hydrogen adsorbed on the surface, hydrogen gas formation (via the Tafel or Heyrovsky step) was enhanced. Further LSV scans with different concentrations of levulinic acid (0 to 0.5 M) in 0.5 M sulfuric acid showed that ECH of levulinic acid was favored over hydrogen evolution at levulinic acid concentrations above 0.1 M. This led to a potential of −1.15 V at a current density of −0.1 mA/cm2. At elevated temperatures, the overall conversion and rate were increased, and the authors concluded that the optimal conditions for ECH of levulinic acid to valeric acid with a Pb cathode were 0.5 M sulfuric acid with 0.2 M levulinic acid, at −1.60 V and 65 °C over 4 h. Additional stability tests were conducted at room temperature under these optimal conditions, across eight 4-h cycles. The selectivity towards valeric acid, Faradaic efficiency, and conversion remained stable at 90%, 94% and 48%, respectively.

To support the development of electroreforming on a larger scale, Kurig et al. studied the production of 2,7-octanedione from levulinic acid in a continuous-flow single-pass reactor [108]. Their setup involved Pt electrodes in 1 M levulinic acid with 0.1 M KOH as the supporting electrolyte. The optimal residence time (reactor volume divided by flow rate) was found to be 36 min, resulting in a levulinic acid conversion of 48% and a selectivity of 52%. Unfortunately, even under optimal conditions, the performance of the single-pass reactor was poorer than that of a semi-batch cell, which could achieve 100% conversion and 75% yield. The authors proposed that, for Kolbe electrolysis, localization of radicals was important to promote decarboxylation.

Studies on electroreforming of cellulose and its derivatives at the cathode are summarized in Table 2 with the key technical information.

Evolution of Hydrogen Coupled with Biomass Electroreforming

Owing to the ever-increasing demand for green hydrogen, replacing OER with biomass oxidation, and thereby decoupling HER from OER, has attracted intensive attention recently. In such hybrid electrolysis, the cathodic generation of green hydrogen is intentionally optimized alongside the enhancement of the anodic biomass electrooxidation.

Glucose Electrooxidation Coupled with Green Hydrogen Generation

In 2017, Du et al. were the first to report replacing OER with glucose electrooxidation to assist green hydrogen generation [109]. In their work, iron phosphide films were prepared in situ on stainless steel mesh and used as anodes. The anodic potentials (vs. RHE) required to operate at a current density of 10 mA/cm2 without and with glucose (at 0.5 M concentration) were 1.52 and 1.22 V, respectively, as compared in Figure 21a. No effervescence was observed at the anode, suggesting that glucose oxidation completely replaced OER. At a fixed potential of 1.9 V, the hydrogen production rate coupled to glucose oxidation was higher than that of regular water electrolysis, where hydrogen production is coupled to OER, as seen in Figure 21b.
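Across these coupled systems, the cathodic H2 output follows directly from the current via Faraday's law, independent of what is oxidized at the anode. A minimal sketch of that relation (the electrode area and Faradaic efficiency below are assumed for illustration):

F = 96485.0  # C per mole of electrons

def h2_rate_mmol_per_h(current_density_mA_cm2, area_cm2, faradaic_eff=1.0):
    # two electrons are consumed per H2 molecule formed
    current_A = current_density_mA_cm2 * 1e-3 * area_cm2
    return current_A * 3600.0 / (2 * F) * faradaic_eff * 1e3

# e.g. 10 mA/cm2 over an assumed 1 cm2 electrode at ~100% Faradaic efficiency
print(f"{h2_rate_mmol_per_h(10, 1.0):.2f} mmol H2 per hour")  # about 0.19 mmol/h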
In 2019, Rafaïdeen et al. designed Pd-Au nanoparticles on carbon for glucose and xylose oxidation [110] (Figure 22a). The optimal Pd-to-Au ratio was found to be 3:7. In 0.1 M NaOH (pH of 13) and 0.1 M glucose, CV revealed that the potential required for 1 mA/cm2 was 0.2 V, with the peak current density (4 mA/cm2) at 0.5 V. Notably, the conversion of glucose was 67%, and the selectivity to gluconic acid was 87%. Hydrogen production was observed at the Pt/C cathode without further characterization. Moreover, the effects of glucose concentration and potential on the reaction were investigated [111]. Increasing the glucose concentration beyond 0.1 M was found to result in lower Faradaic efficiency and chemical yields, suggesting surface poisoning. Unsurprisingly, the rate of gluconic acid production was higher at higher voltage, although the Faradaic efficiency decreased. The production of hydrogen was estimated by calculating the charge transferred to form the measured gluconic acid. It was proposed that, theoretically, 1 ton of 0.1 M glucose could produce 18.47 kg of hydrogen at 0.6 V with this setup, consuming 297 kWh of electricity. However, it should be noted that this assumes no degradation of the electrodes.
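The quoted electricity consumption can be cross-checked with Faraday's law: at a given cell voltage, the electrical energy per kilogram of H2 is fixed (assuming a near-100% Faradaic efficiency for hydrogen). A rough sketch of that check:

F = 96485.0        # C per mole of electrons
M_H2 = 2.016e-3    # kg per mole of H2

def kwh_per_kg_h2(cell_voltage_V, faradaic_eff=1.0):
    # 2 F of charge per mole of H2; electrical energy = charge x cell voltage
    energy_J_per_mol = 2 * F * cell_voltage_V / faradaic_eff
    return energy_J_per_mol / M_H2 / 3.6e6

print(f"{kwh_per_kg_h2(0.6):.1f} kWh per kg H2")                   # ~16 kWh/kg at 0.6 V
print(f"{kwh_per_kg_h2(0.6) * 18.47:.0f} kWh for 18.47 kg of H2")  # ~295 kWh, close to the quoted 297 kWh
print(f"{kwh_per_kg_h2(1.8):.1f} kWh per kg H2")                   # ~48 kWh/kg at a typical ~1.8 V water-electrolysis voltage

The linear scaling of energy demand with cell voltage is what makes replacing OER with a lower-potential biomass oxidation attractive from an energy standpoint.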
In 2020, Lin et al. investigated the application of Co-Ni alloy electrocatalysts on carbon cloth [112] (Figure 22b). SEM imaging showed significant macropore distribution, and X-ray photoelectron spectroscopy characterizations suggested that partial oxidation of the alloy occurred. Additional LSV determined the potential for 10 mA/cm2 on the Co-Ni alloy electrode to be 1.096 V in 1 M KOH with 0.1 M glucose, which was less than those on the bulk Co (1.143 V) or Ni (1.138 V) electrodes. Further electrochemical impedance spectroscopy (EIS) measurements and Tafel slope comparisons among the Co, Ni and Co-Ni alloy electrodes showed the superior conductivity and kinetics of the alloy electrode. In a two-electrode cell with Co-Ni alloy as both electrodes, 10 mA/cm2 was achieved at a voltage of only 1.39 V in 1 M KOH with 0.1 M glucose. Lin and coworkers also used a customized cobalt nickel hydroxide nanosheet (Co0.5Ni0.5(OH)2 NS) on carbon cloth as the anode, and replaced OER with glucose oxidation [113] (Figure 22c). The Co0.5Ni0.5(OH)2 NS electrode measured a potential of 1.17 V at a current density of 10 mA/cm2. After a 12 h reaction at constant current density, the potential increased slightly to 1.20 V, indicating superior electrode stability. In a two-electrode cell with a Pt cathode, the required potentials for 10 and 100 mA/cm2 in 1 M KOH only were 1.47 and 1.75 V, respectively.
With the addition of 0.1 M glucose, the corresponding potentials decreased to 1.22 V and 1.56 V, respectively.

Similarly, Liu et al. fabricated nickel-molybdenum disulfide (Ni-MoS2) for use as both anode and cathode [114] (Figure 22d). LSV showed that the potentials needed for 10 mA/cm2 in 1 M KOH without and with 0.3 M glucose were 1.64 and 1.46 V, respectively. Additional Tafel slope, EIS and double-layer capacitance characterizations revealed the preferable kinetics and catalytic activity of Ni-MoS2 compared to MoS2 and Pt/C, as well as good stability through 12 h chronoamperometric tests. Subsequently, a two-electrode cell was constructed with Ni-MoS2 on carbon paper as both the anode and cathode, and 1.67 V was required to reach 10 mA/cm2. No bubbles were observed at the anode, suggesting complete suppression of OER.

Zheng et al. used iron-doped cobalt diselenide nanowires on conductive carbon cloth (Fe0.1-CoSe2/CC) as an alkaline anode and acidic cathode to produce gluconate (the salt of gluconic acid) and hydrogen, respectively [115] (Figure 23a). For glucose oxidation in 1 M KOH, LSV scans revealed that the Fe0.1-CoSe2/CC electrode required potentials of 1.65 V and 1.12 V without and with 0.5 M glucose, respectively, as shown in Figure 23b. With glucose, no bubbles were observed at the anode, suggesting complete suppression of OER. Moreover, the chronopotentiometric scans showed stable potential responses, signifying stable mass transport properties. The stability of the electrode was confirmed by XRD and morphological scans taken before and after 8 h of electrolysis at 1.15 V, which showed no sign of differences. The authors also analyzed cathodic hydrogen evolution using Fe0.1-CoSe2/CC in 0.5 M H2SO4, and found that an overpotential of 270 mV was required to reach a current density of 100 mA/cm2 (Figure 23c). Similar tests were conducted to confirm catalytic activity and electrode stability. A two-electrode cell was then constructed with a bipolar membrane separating the 1 M KOH anolyte and the 0.5 M H2SO4 catholyte. To reach 10 mA/cm2, the required cell potential was 1.34 V in the absence of glucose, which decreased to 0.72 V with the addition of 0.5 M glucose. For generating green hydrogen, the cell was maintained at 10 mA/cm2, with 1 M KOH and 0.5 M glucose at the anode and 0.5 M H2SO4 at the cathode.
Performing chronopotentiometric electrolysis for 100 min yielded 0.15 mmol of H2 with 99% Faradaic efficiency.

Ding et al. further proposed that in order for hydrogen electrolysis to be truly green, electrodes should be part of a closed material cycle [116]. Interestingly, they developed carbon electrodes from biowaste to replace OER with carbon oxidation, intending for the carbon anode to be consumed in the process. Rather than electrolyzing glucose in solution, the authors fabricated carbon pellets (Figure 24a) by means of hydrothermal treatment of glucose, which were deposited onto glassy carbon electrodes. The two-electrode cell potential was fixed at 2.4 V, and the sacrificial carbon anode and Pt cathode were deployed. This anode was maintained at a pH of 13, and the carbon anode was oxidized to carbonate, allowing continuous hydrogen formation at the cathode. The test cell was left to run for 10 days, and the products were quantified (Figure 24b). Doping carbon pellets with nitrogen resulted in better anode stability without influencing electrode conductivity or hydrogen production, as seen in the higher H2 evolution after 10 days in Figure 24c compared to that without nitrogen doping in Figure 24b.

5-HMF Electrooxidation Coupled with Green Hydrogen Generation

5-HMF electrooxidation can also replace OER for safe green hydrogen generation. Yang et al. explored the electroreforming of 5-HMF and simultaneous hydrogen production [117] using Mo-doped nickel selenides on Ni foam (Mo-Ni0.85Se/NF). With Mo-Ni0.85Se/NF as both electrodes, in 1 M KOH, to achieve a current density of 50 mA/cm2, adding 10 mM 5-HMF reduced the overall potential from 1.68 to 1.50 V. At an anodic potential of 1.4 V, complete conversion was observed, and Faradaic efficiency and selectivity were both high at >95%, while 3.8 mmol of hydrogen was produced with a Faradaic efficiency close to 100% at the cathode.
The authors proposed that Mo doping changed the d-band centre of Ni and reduced the hydrogen adsorption energy on the electrode surface, increasing the electrocatalytic activity.

Jiang et al. used Co-P catalysts on Cu foam for concurrent anodic oxidation of 5-HMF into FDCA and cathodic reduction of water to H2 [118]. In 1 M KOH, the anodic potential required for a current density of 20 mA/cm2 decreased from 1.53 to 1.38 V upon addition of 50 mM 5-HMF. Periodic HPLC analysis during the anodic reaction suggested that FDCA is obtained through the DFF route (see Figure 10 for reaction pathways). The authors then employed a two-electrode cell for concurrent anodic 5-HMF oxidation and cathodic hydrogen evolution. In 1 M KOH and using Co-P/Cu foam electrodes, the overall potential required for 20 mA/cm2 decreased from 1.59 to 1.44 V after adding 50 mM 5-HMF (Figure 25a). In this case, 100% 5-HMF conversion and 90% FDCA yield were measured at the anode, while 8 mmol of hydrogen gas was produced with 100% Faradaic efficiency at the cathode, as depicted in Figure 25b.

Combining Ni and vanadium oxides was found to enhance charge redistribution and weaken hydrogen adsorption on Ni, which would otherwise limit catalytic activity [119]. Thus, Liang et al. fabricated a nickel nitride-vanadium trioxide (Ni3N-V2O3) catalyst for 5-HMF oxidation and hydrogen evolution [120]. The cathodic performance of Ni3N-V2O3 was superior to that of Ni3N or V2O3 alone, and on par with Pt/C, as shown in Figure 26a. At the anode, Ni3N-V2O3 was similarly more active than Ni3N, and the addition of 10 mM 5-HMF reduced the overpotentials by about 0.14 V. Adding 5-HMF also caused the disappearance of the oxidation peak of Ni2+ to Ni3+ before water splitting (Figure 26b). Experiments were performed using a two-electrode cell (with Ni3N-V2O3 as both anode and cathode) in 1 M KOH with 10 mM 5-HMF, at a fixed current of 10 mA/cm2 with a corresponding overall potential of about 1.4 V. An FDCA yield of 96.1% and selectivity of 98.7%, and a hydrogen Faradaic efficiency of over 90%, were reported, as shown in Figure 26c,d.
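In such paired cells the anodic and cathodic products are linked by charge balance: assuming the commonly cited six-electron oxidation of 5-HMF to FDCA and two electrons per H2, every mole of FDCA formed corresponds to up to three moles of H2 when both Faradaic efficiencies approach 100%. A small sketch of this bookkeeping (the input numbers are illustrative and not taken from the cited studies):

def coupled_h2_from_fdca(n_fdca_mol, fe_anode=1.0, fe_cathode=1.0):
    # 6 electrons are released per 5-HMF oxidized to FDCA; 2 electrons give one H2
    moles_of_electrons = n_fdca_mol * 6 / fe_anode
    return moles_of_electrons / 2 * fe_cathode

# Illustrative numbers only: 2.5 mmol FDCA at 95% anodic FE and ~100% cathodic FE
print(f"{coupled_h2_from_fdca(2.5e-3, fe_anode=0.95) * 1e3:.1f} mmol H2")  # about 7.9 mmol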
Studies on the evolution of hydrogen coupled with biomass electroreforming are summarized in Table 3 with the key technical information.

Conclusions and Outlook

The electroreforming of biomass compounds represents a promising green and sustainable route for synthesizing value-added chemicals with minimal damage to the environment. Compared to thermochemical routes, the operating conditions of the electroreforming route are usually milder, and product controllability is better through tuning of electrolytic cell parameters such as pH and potential. Compared to biochemical routes, electroreforming processes can be conducted in more compact devices and over much shorter durations. For the electroreforming of cellulose derivatives (glucose, 5-HMF, levulinic acid), exciting progress has been made by using various metal electrocatalysts to improve products and yields. In particular, recent studies have demonstrated the use of non-noble metals or bimetallic alloys as electrocatalysts for both oxidation and hydrogenation processes. These developments shift reliance away from expensive noble metals, potentially increasing the economic viability of electroreforming techniques. Studies of flow reactor cells [54,67,98,108] showcased the continuous production they provide, and therefore their industrial scalability. Notably, a great advantage of the electrochemical route is the cogeneration of green hydrogen, which plays an indispensable role in decarbonization. From an energy-saving perspective, both holes and electrons from electricity are utilized in such hybrid electrolysis, coupling biomass electrooxidation and water reduction and leading to valuable anodic and cathodic products. Despite these promising advantages of biomass electroreforming over state-of-the-art biomass valorization, there are challenges to tackle before large-scale implementation.
Investigations into the direct electroreforming of cellulose remain under-represented, mainly because the large polymer cannot be readily hydrolyzed in the electrolyte. In order to make every stage of the electroreforming pipeline green and sustainable, glucose and the other derivatives should be obtained from cellulose, so as not to compete with edible plant sources. At the time of writing, several studies have been published to uncover the electrooxidation and depolymerization mechanisms of cellulose, and a few studies have analyzed useful products from cellulose electrolysis [36]. Thus, more investigations into energy-efficient and cost-effective pre-treatment methods are needed for converting raw biomass polymers into smaller molecules that can be readily reformed by electrochemical processes. To this end, the rational combination of mechanochemical and biological processes could hold great promise. High-value products would offset the overall production cost, increasing the economic viability of the electroreforming route. Raw biomass consists of large polymers, and selectively converting them to high-value products is challenging. Ideally, one would like to map out all possible reaction pathways and products, and then study how to control the selectivity. Therefore, in situ/operando measurements such as FTIR and SERS-based optical methods are promising approaches to shed light on the detailed reaction mechanisms. Moreover, complementary theoretical models to predict the thermodynamics and kinetics of the reactions are critical for a complete understanding of the processes involved in electroreforming. Nevertheless, atomic-scale modeling of large polymers requires substantial computing resources and is very costly. As studies performed by Roman et al. have shown, in situ methods such as spectroscopy can be combined with density functional theory calculations to provide an enhanced understanding of reaction mechanisms [89]. A rational combination of in situ/operando characterization and theoretical modeling could lead to time- and energy-efficient investigation without compromising accuracy. Studies have also mostly focused on exploring the feasibility of different advanced catalysts and electrodes. It is noted that most reported catalysts show superior activity but inferior stability, which is crucial for practical use. Many lessons can be learnt from the development of water electrolysis across the full pH range. Alkaline water electrolysis is by far the cheapest and most scalable technique for green hydrogen generation. Despite the recent drastic reduction in its cost, PEM water electrolysis still suffers from poor scalability, mainly due to its Pt-group catalysts, particularly the anodic catalysts based on iridium and ruthenium. A similar challenge faces biomass electroreforming in acidic media. To this end, strategies for stabilizing non-precious catalysts and decreasing the loading of precious catalysts in PEM water electrolysis could be implemented for acidic biomass electroreforming. Nevertheless, biomass electroreforming in alkaline media remains comparatively more cost-effective and scalable. Electroreforming of biomass represents a greener route for the electrosynthesis of chemicals. Despite its advantage of better sustainability, it is challenging to control the reaction pathways. Advanced catalyst design, e.g., tandem catalysts, could enrich the toolbox of pathways for biomass electroreforming.
Lastly, in order to advance our collective understanding, establishing consistent benchmarks for evaluating the efficacy of different designs will be critical. A standard protocol covering critical parameters, such as potential, current density, and stability, is needed before results can be compared across the literature. Such benchmarking will greatly facilitate the development of catalysts and electrodes. Funding: This work was funded by A*STAR Science and Engineering Research Council AME IRG funding (A1983c0029) and MOE Tier 1 (RG58_21).
2021-11-19T16:12:19.340Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "1543998e779540f69f4a98c572858a6924be65dc", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-666X/12/11/1405/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "85ab5a97f14754a9a1cefb9fec9a20b17b1c95f4", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
990669
pes2o/s2orc
v3-fos-license
Cavity-free plasmonic nanolasing enabled by dispersionless stopped light

When light is brought to a standstill, its interaction with gain media increases dramatically due to a singularity in the density of optical states. Concurrently, stopped light engenders an inherent and cavity-free feedback mechanism, similar in effect to the feedback that has been demonstrated and exploited in large-scale disordered media and random lasers. Here we study the spatial, temporal and spectral signatures of lasing in planar gain-enhanced nanoplasmonic structures at near-infrared frequencies and show that the stopped-light feedback mechanism allows for nanolasing without a cavity. We reveal that in the absence of cavity-induced feedback, the subwavelength lasing mode forms dynamically as a phase-locked superposition of quasi dispersion-free waveguide modes. This mechanism proves remarkably robust against interface roughness and offers a new route towards nanolasing, the experimental realization of ultra-thin surface emitting lasers, and cavity-free active quantum plasmonics.

Supplementary Figures

Supplementary Figure 3 (caption): (a) Average inversion ΔN and stimulated emission rate R_stim in dependence of the gain section width w during steady-state lasing of the TM2 mode. (b) Confinement factor Γ and energy confinement Γ_E of the lasing mode to the gain section in dependence of w. The factorization parameter ζ is defined in the text. Semi-analytical results for Γ and Γ_E, obtained from mode solver calculations, are represented by the dotted lines and fit well in the limit of large w. (c) Effective mode area calculated as A_g/Γ from the confinement factor in (b) and the gain section area A_g = w · 270 nm. (d) The spectral blue-shift of the lasing frequency with decreasing gain section width. See "Steady-state analysis for varying gain section width" in the supplementary discussion for further details.

Supplementary Figure 5 (caption): (b) Confinement factor Γ and energy confinement Γ_E in dependence of the gain section width during steady-state lasing of the TM1 mode. The factorization parameter ζ is defined in the text. (c) Effective mode area A_eff calculated as A_g/Γ from the confinement factor in (b) and the gain section area A_g = w · 90 nm. See "Steady-state analysis of plasmonic mode" in the supplementary discussion for further details.

Supplementary Discussion

Optical pumping. In-plane optical pumping of the gain medium is possible at frequencies where the dispersion curve supports propagating modes. Efficient in-coupling of radiation into the waveguide core, for example through a metal grating on top of the thin metal layer or through end-fire coupling, makes it possible to optically deliver pump power to the SL lasing region from within the waveguide. In the planar metal-dielectric stack of Figure 2a of the main text, we can use a high-k mode of the TM2 branch as the pump, exploiting the positive curvature of the band's dispersion (see Figure 2b of the main text). For the frequency ω_pump of the cw pump field we choose the absorption maximum of the gain medium at ω_a/2π = 223.7 THz (see Supplementary Table 2), where k = 11.31 µm⁻¹ and the group velocity is v_g ≈ 0.16c. The results for steady-state SL lasing in a setup with a 400 nm wide gain section are presented in Supplementary Fig. 1. The pump field enters from the negative x-direction of Supplementary Fig. 1a and, away from the gain section, displays the TM2 mode profile with its two non-zero components E_x (top) and E_y (bottom).
Interference with the lasing mode leads to a strong deviation from the pump mode profile close to and inside the gain section. The interference is particularly strong in the E_x component, which is the major component of the lasing field. Within this small gain material section of 400 nm width, only about 19% of the pump energy is absorbed. The field interference observed in Supplementary Fig. 1a can be decomposed into its spectral components by applying spectral filtering to the steady-state field dynamics at the lasing frequency. Supplementary Figure 1b shows that the resulting mode profile is almost perfectly symmetric around the center of the gain section in the propagation direction, which is also the case for the steady-state inversion displayed in Supplementary Fig. 1c. We can conclude that a constant, spatially-homogeneous pump rate r_p^0 will give results equivalent to those presented here as long as only a small proportion of the pump field energy is absorbed within the gain medium.

Density of optical states. The local density of optical states (LDOS) in lossless slow-light photonic systems is enhanced by a factor of 1/v_g due to an effective prolongation of the interaction time between the emitter and the fields. This LDOS enhancement leads to a potentially dramatic speed-up of spontaneous emission and can result in strong nonlinear emitter-field interactions through an associated increase in the electric field strength. Here, we calculate the Purcell factor, i.e. the enhancement of the partial LDOS in comparison to its free-space value, for a dipole emitter positioned within the waveguide core and polarized to couple either to TM or TE modes of the planar metal-dielectric stack. Subsequent averaging over all positions within the waveguide core yields the average Purcell factor. This enhancement can be compared to equation (1), evaluated using the dispersion and field profiles of the waveguide modes. F_k(ω) is normalized to the free-space value of the partial LDOS, ρ_0 = ω/(4πc²) for TM dipoles, with the corresponding free-space value for TE dipoles; the remaining quantities entering the weighting are the spectral density of optical states (DOS) and the total energy of the mode at wavevector k. Hence, equation (1) describes an effective weighting of the spectral DOS with the average emitter-field overlap of the mode inside the waveguide core. In Supplementary Fig. 2, we compare data extracted from FDTD simulations with results obtained from equation (1). We find very good agreement for both the TM and TE modes of the stack waveguide. The Purcell factor for TM modes is about 8 times larger than for TE modes and peaks close to the SL frequencies at 193.8 THz. Due to the positive curvature of the TM2 modal dispersion, the enhancement falls off more slowly towards higher frequencies. The enhanced LDOS directly impacts the properties of SL lasing, causing an acceleration of the light-matter interaction and a large β-factor, as spontaneous emission is predominantly channeled into the SL mode.

Confinement factor and the effective mode volume. Laser rate equations are a set of two coupled differential equations that approximate the dynamics of the photon and carrier numbers [2]. Applied to nanolasers [3,4], this simple model can reproduce the basic characteristics of a laser, such as its threshold behavior, transient dynamics, and modulation speeds. A comparison between the spatially resolved FDTD simulations and the rate equation model allows us to extract effective parameters of the system. We are particularly interested in the confinement factor and the effective mode volume of SL lasing.
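For orientation, a generic textbook form of such coupled photon-carrier rate equations (written here for the photon density s and carrier density n, with symbols as commonly defined; this is a standard illustrative form, not the specific expressions derived in this supplementary) reads

\frac{ds}{dt} = \Gamma v_g\, g(n)\, s + \Gamma \beta \frac{n}{\tau_{sp}} - \frac{s}{\tau_p}, \qquad \frac{dn}{dt} = \frac{\eta I}{q V_a} - \frac{n}{\tau_{sp}} - v_g\, g(n)\, s,

where Γ is the confinement factor, v_g the group velocity, g(n) the material gain, β the spontaneous-emission coupling factor, τ_sp and τ_p the carrier and photon lifetimes, and ηI/(qV_a) the pump rate.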
We first note that Poynting's theorem, which describes the evolution of the electromagnetic energy density U(r, t), can be transformed into a rate equation for the photon number S at the lasing frequency ω_0 by volume integration of the total energy density, S = (ℏω_0)⁻¹ ∫_V d³r U(r, t). This equation also defines the cavity loss rate γ_c and the stimulated emission rate R_stim(N). Fast phase-oscillations of the fields E and H at the lasing frequency are eliminated through a time averaging over one period, and the integration volume V is taken to encompass the full SL lasing structure, making the photon number a slowly-varying, effective variable of the system. In dispersive media, the energy density must account for the frequency dependence of the permittivity; here, ε(r, ω) is the spatially-resolved relative permittivity, which follows a Drude dispersion in the metal layers and is equal to ε_a in the active (gain) waveguide core layer (the prime denotes the real part of the complex quantity). It is important to account for the dispersive character of the permittivity to correctly describe the electric energy density U_E, in particular inside metals. In the time domain, this dispersive character is expressed by the dynamic polarization response P_f(r, t) of the free electron plasma in the metal and its in-phase Ṗ_f · E contribution to the electric energy (see Box 1 in Hess et al. [5]). In Eq. (2), the cavity loss rate γ_c includes both outcoupling of energy from the laser and dissipation of energy inside the laser. The former is connected to the closed contour integral over the Poynting flux, while the latter is given by the work that the fields perform on the free electron plasma of the metal. The rate R_stim(N) arises from stimulated emission of photons in the gain medium with average carrier number N. When describing semiconductor (microcavity) lasers, and more recently plasmonic nanolasers [3,6,7], using rate analysis, one finds that a confinement factor Γ must be introduced in the rate equations due to the imperfect spatial overlap of the mode profile with the gain section. This confinement factor expresses the fact that the mode volume V_eff, which connects the photon density s to the photon number S = V_eff s, is distinct from the active (gain) volume V_a, and it is defined as Γ ≡ V_a/V_eff [8]. The confinement factor Γ enters the rate of stimulated emission, equation (4), as does a group velocity v_g (at this point the specific type of group velocity is not fixed), a factorization parameter ζ and the bulk gain coefficient g(N) = g(ΔN) = σ_a ΔN. The specific definition of the confinement factor and its physical interpretation then determine which group velocity must be used. We follow Chang and Chuang [8] and choose the definition of equation (5), which identifies v_g as the material group velocity of the gain material, v_g = v_{g,a} = c/n_g with n_g = ∂(ωn_a)/∂ω. The approximation on the rhs of Eq. (5) is valid for a weakly dispersive gain material with v_{g,a} ≈ v_{ph,a} = c/n_a. The factorization parameter ζ measures the degree of inhomogeneity of the inversion profile (spatial hole-burning) and is defined in equation (6), with the average inversion density ΔN(t) = V_a⁻¹ ∫_{V_a} d³r ΔN(r, t). Equation (6) expresses a functional dependence of ζ(t) on the inversion and field intensity profiles, and hence an implicit transient dynamics that stabilizes when the laser reaches steady state. In lasers that exhibit negligible spatial hole-burning effects, ζ(t) is close to unity and can be approximated by a constant factor ζ ≈ 1.
A time-constant ζ smaller than 1 can also be assumed when the spatial distributions of the inversion and the mode profiles vary only little with time. In these cases, a linear relationship between the stimulated emission rate and the inversion density follows, R_stim ∝ ΔN, a functional dependence that is commonly adopted in rate equation analyses [3]. For further comparison we calculate the stimulated emission rate R_stim from the dynamic change of the total energy, which we are able to extract in FDTD simulations using a rate retrieval method based on Poynting's theorem [5,9]. From Eq. (4) and the knowledge of the confinement factor and average inversion, it is then possible to calculate the factorization parameter ζ. We also compare the confinement factor Γ of the rate equation analysis to the energy confinement Γ_E = ∫_{V_a} d³r U / ∫_V d³r U, which is defined as the electromagnetic energy in the gain section divided by the total energy of the lasing mode. The two confinement factors will differ in strongly guiding or plasmonic systems because of the modal character of the fields, i.e. an unequal distribution of the electromagnetic energy into electric and magnetic components. As a side note, recent publications on plasmonic nanolasers have suggested the use of a confinement factor defined to incorporate the average energy (or waveguide group) velocity v_E of the underlying waveguide system of the nanolaser [6,7]. This is possible because the stimulated emission rate R_stim and the bulk gain coefficient g(N) in Eq. (4) are invariant to the definition of the effective mode volume, while the confinement factor relates to the group velocity v_g used. The distinction into confinement factor and group velocity can therefore be made in terms of waveguide properties, describing the amplification of a wave packet as it travels along the active waveguide with energy velocity v_E. Comparing the confinement factor Γ′ in R_stim(ΔN) = v_E Γ′ ζ g(N) [6,7] to Γ in Eq. (5), we find the effective confinement factor Γ′ = Γ v_{g,a}/v_E. The pre-factor v_{g,a}/v_E points towards effective confinement factors Γ′ that can become larger than unity in waveguiding systems with low energy velocity. This also implies a divergence of Γ′ as v_E → 0, i.e., in the stopped-light regime. Clearly then, Γ′ is not suitable for the description of the stationary lasing mode in the SL laser, in particular considering that the stimulated emission rate into the mode, R_stim, does not diverge.

Steady-state analysis for varying gain section width. The steady-state properties of the SL laser are here analyzed in dependence of the gain section width. We particularly focus on the confinement factor Γ and the effective mode area A_eff of the rate equation analysis as introduced above. The leaky TM2 mode possesses two SL points, at ω_1/2π = 193.8 THz (λ_1 ≈ 1546.9 nm), k_1 = 0 µm⁻¹ and at ω_2/2π = 193.78 THz (λ_2 ≈ 1547.06 nm), k_2 = 1.42 µm⁻¹. The gain parameters are listed in Supplementary Table 1. In Supplementary Fig. 3a, we observe that the average inversion ΔN increases sharply for smaller gain section widths w, while the effective modal gain in terms of the stimulated emission rate R_stim increases only slightly, from 3.66 ps⁻¹ to 3.71 ps⁻¹. These latter values compare very well to mode solver calculations of the total modal loss: γ = 3.62 ps⁻¹ at k = 0 µm⁻¹, increasing to 3.88 ps⁻¹ at k = 4 µm⁻¹.
As the localized wave-packet is composed of a range of wavevectors between these limits, the effective loss of the lasing mode calculates as a weighted average of the k-dependent modal losses. Higher localization, as a result of a reduced gain section width, increases the weight of high-k components, hence increasing the total loss and with it the stimulated emission rate. The sharp rise in ΔN for smaller w is linked to an equally sharp reduction in the confinement factor Γ of the lasing mode (Supplementary Fig. 3b); we observe a decrease from above 70% to below 16%. Accordingly, the mode cannot localize fully over the gain section for small widths anymore. Alongside the confinement factor Γ we plot the energy confinement factor Γ_E that has also been extracted numerically in steady state. The fact that Γ_E does not differ much from Γ gives evidence of the photonic character of the TM2 mode, with its energy being almost equally distributed between electric and magnetic field components. For increasing width w of the gain section, both Γ and Γ_E asymptotically saturate. The respective values have been determined semi-analytically from mode-solver calculations (plotted as dotted lines in Supplementary Fig. 3b) and are in excellent agreement with those obtained in steady state from the dynamic simulations. Supplementary Figure 3b also shows that the increase in Γ is to a certain degree mitigated by a decrease in the factorization parameter ζ, which falls from about 90% to just below 50%. The decrease of ζ at large widths is a manifestation of spatial hole burning. As the gain section widens, the SL mode suffers from an increasingly poor overlap with regions at the edge of the gain section where a high inversion builds up. Eventually the spatial hole burning becomes so strong that the SL pulse breaks up and dynamic mode competition sets in. We also note that the TM2 mode profile has an anti-node in the center of the waveguide core where the inversion cannot be depleted. Consequently ζ cannot reach a value of 1, even for very small gain section widths. The effective mode area A_eff = A_g/Γ in dependence of the gain section width is displayed in Supplementary Fig. 3c. Despite the strong decrease in Γ, the effective mode area A_eff shrinks with decreasing w, a characteristic that is accounted for by the linear dependency A_g ∝ w and the sub-linear decrease of Γ with w. For small gain section widths A_eff eventually levels off. In this regime, where Γ ∝ w, the SL mode retains its field profile and features an almost constant amplitude across the gain section. The smallest measured mode area is obtained for a gain section width of w = 200 nm and has a value of A_eff ≈ 0.14λ². The results indicate that the subwavelength confinement of the lasing mode in the SL laser is ultimately determined by its dispersion. As the mode is compressed, it becomes (owing to group velocity dispersion) more lossy and, consequently, requires more gain. Once the inversion reaches 100%, the losses cannot be compensated by gain anymore and the minimum mode volume is encountered. Finally, we note that the lasing frequency blue-shifts with decreasing width of the gain section (Supplementary Fig. 3d). This is to be expected because the higher localization of the mode over smaller gain sections is based on the inclusion of increasingly larger wavevector components. The positive curvature of the band accordingly forces the lasing frequency to shift to slightly higher frequencies.
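The quoted numbers are mutually consistent and easy to check: with the gain section area A_g = w · 270 nm and a lasing wavelength of about 1547 nm, a confinement factor of roughly 16% at w = 200 nm gives A_eff = A_g/Γ of about 0.14λ². A small numerical sketch of that check (the 16% value is an approximate read-off, as stated above):

lam_nm = 1547.0            # lasing wavelength in nm (about 193.8 THz)
w_nm, h_nm = 200.0, 270.0  # gain section width and height, so A_g = w * 270 nm
gamma = 0.16               # approximate confinement factor at w = 200 nm

A_g = w_nm * h_nm          # gain section area, nm^2
A_eff = A_g / gamma        # effective mode area, nm^2
print(f"A_eff is about {A_eff / lam_nm**2:.2f} lambda^2")  # ~0.14 lambda^2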
A further characterization of the confinement factor and mode area in dependence of the gain section width is shown in Supplementary Fig. 5. We find a confinement factor Γ that changes only little (from about 57% to 44%) when decreasing w from 1500 down to 200 nm. This is in stark contrast to the properties of the TM 2 -based lasing mode for which Γ was seen to decrease from about 70% to 16%. In addition, most of the electromagnetic energy of the TM 1 mode is electric in nature inside the waveguide section, i.e. Γ ≈ 2Γ E . This also leads to a high and fairly constant factorization parameter ζ ≈ 67%, as the mode profile does not feature field nodes along the y-direction. With the confinement factor remaining close to constant, the effective mode area A eff decreases almost linearly with the gain section width (Supplementary Fig. 5b). For this mode, the comparably larger dissipative loss prevents lasing oscillations for small w despite the large final confinement factor of Γ ≈ 44% and high ζ ≈ 68%, a result of the plasmonic character of the mode. A mode area of only A eff ≈ 7.6 · 10 −3 λ 2 is extracted for the smallest gain section width for which we observe SL lasing. The superior confinement of the TM 1 -based SL lasing mode and its much smaller effective mode volume make this structure an extremely interesting mode for lasing operation in the deep-subwavelength regime at potentially ultra-fast modulation speeds. Energy is emitted from this SL laser in terms of surface plasmons propagating in the plane of the waveguide core layer. This opens the possibility to use the SL laser as a source of surface plasmon polaritons or, equally, to achieve directed emission to free space through grating coupling. Impact of surface roughness on SL lasing. We analyze a stack structure without gain material where we inject wave packets at two distinct angles corresponding to wavevectors either side of the second SL point. From Supplementary Fig. 4 we find that, up to a critical rms roughness value, the energy velocity remains constant in time with a clear dependence on the injection angle. Hence, below this critical value, one can always find an optimum excitation angle for which the energy velocity along the waveguide direction is zero. It is apparent from the figure that the optimal angle depends on the level of surface roughness and additionally varies from sample to sample. At rms surface roughness of 3 nm, the nature of pulse propagation is changed dramatically as the energy velocity is not constant anymore and the pulse is equally likely to propagate in a forwards or backwards direction. In this regime, the propagation of energy is diffusive and correlates only weakly with the waveguide dispersion. As a result of the strong scattering at the surface inhomogeneities one observes a pulse breakup and the disappearance of the global SL point.
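The supplementary text does not spell out how the rough interfaces are constructed; one common recipe, sketched below under the assumption of Gaussian-correlated roughness, is to smooth white noise with a Gaussian kernel and rescale it to the desired rms value (here in the 1-3 nm range studied above). The correlation length and grid spacing are illustrative assumptions, not values from the simulations.

```python
import numpy as np

def rough_profile(n_points, dx, rms, corr_length, seed=0):
    """Generate a 1D surface-height profile with a prescribed rms roughness.

    White noise is smoothed with a Gaussian kernel of width corr_length and
    rescaled so that the standard deviation of the profile equals rms.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n_points)
    x = (np.arange(n_points) - n_points // 2) * dx
    kernel = np.exp(-x**2 / (2.0 * corr_length**2))
    kernel /= kernel.sum()
    smooth = np.convolve(noise, kernel, mode="same")
    return rms * smooth / smooth.std()

# Example: 10 µm long interface sampled every 5 nm, 3 nm rms roughness.
h = rough_profile(n_points=2000, dx=5e-9, rms=3e-9, corr_length=50e-9)
print(f"rms roughness = {h.std()*1e9:.2f} nm")  # ~3.00 nm by construction
```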
2016-05-04T20:20:58.661Z
2014-09-17T00:00:00.000
{ "year": 2014, "sha1": "bbae86d0500cbb31a4d16e34d9c26e5321758728", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/ncomms5972.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "de675e401e9c4fda10842328c78b4cd8f2e6b332", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
52068387
pes2o/s2orc
v3-fos-license
Phyllostachys edulis forest reduces atmospheric PM2.5 and PAHs on hazy days at a suburban area This study aims to illustrate the role of Phyllostachys edulis in affecting air quality on hazy and sunny days. P. edulis is an important plant that grows well in suburban areas of southern China. In this manuscript, under 2 weather conditions (hazy day; sunny day), changes in atmospheric particulate matter (PM), polycyclic aromatic hydrocarbons (PAHs), associated volatile organic compounds (VOCs), and PAHs in leaves and soils were measured, with PM-detection equipment and the GC-MS method, in a typical bamboo forest in a suburban area. The results showed that: (1) The bamboo forest decreased atmospheric PM2.5 and PM10 concentrations significantly, by 20% and 15%, respectively, at nightfall on the hazy day, when the concentrations were higher than at any other time. Similar effects on atmospheric PAHs and VOCs were also found. (2) Significant increases in the PAHs of leaves and soil were found inside the forest on the hazy day. (3) The bamboo forest also reduced the atmospheric VOC concentrations and changed the composition of the list of the 10 VOCs present at the highest concentrations. Thus, bamboo forests strongly regulate atmospheric PM2.5 through capture or retention, as indicated by the changes in atmospheric VOCs and the increases in PAHs in leaves and soil. With the development of the economy, people demand a better quality of life. However, air quality has decreased and hazy weather occurs frequently, affecting human health. In 2013, haze affected more than 17 provinces in China, covering 1.43 million km 2 and over 0.6 billion people 1 . Particulate matter (PM), especially fine particulates with an aerodynamic diameter of less than 2.5 μm (PM 2.5 ) and 10 μm (PM 10 ), is a very important indicator of hazy days. As highly polluted air continuously diffused across China at the beginning of 2013, 5 strong haze pollution episodes occurred in the Beijing-Tianjin-Hebei region. At the most serious time, PM2.5 broke through 600 μg/m 3 , PM1 broke through 300 μg/m 3 , and the concentrations of organic matter, sulfate and nitrate in PM1 reached 160, 70 and 40 μg/m 3 , respectively 2 . PM 2.5 and PM 10 can adversely affect human health, resulting in premature mortality, pulmonary inflammation and accelerated atherosclerosis, among other conditions 3,4 . PM 2.5 can easily pass through the nose and mouth, then penetrate the lungs, and subsequently cause a range of effects on humans, such as impaired lung function and a reduced oxygen-carrying ability of hemoglobin, eventually leading to respiratory and cardiovascular diseases [5][6][7] . In recent years, many studies have indicated that trees can significantly reduce PM 2.5 and can absorb gaseous air contaminants [8][9][10][11] , especially in urban and suburban areas. Studies indicated that approximately 215,000 t of total airborne PM 10 were removed by urban trees in the United States 12 , and an increase in tree cover from 3.7% to 16.5% removed approximately 200 tons of PM 10 each year in the West Midlands 13 . Forest canopies significantly altered the sulfur concentration and sedimentation rate of PM 2.5 in a coniferous forest in central Japan and in a Norway spruce forest 14,15 . The ability of trees to clean the air might be related to the following: an increase in vegetation cover, which reduces the sources of PM 2.5 ; PM can be absorbed by different tree organs; a decrease in wind speed may result in PM fallout; and changing wind direction might prevent PM 2.5 transport into certain areas [16][17][18][19] .
Various factors, e.g., the concentration of atmospheric PM 2.5 and PM 10 , weather conditions, and tree biological characteristics, affect the ability of trees to remove PM 2.5 16,20 . Researchers have focused mainly on broad-leaved and coniferous trees, such as spruce, cypress, pine, gingko, and crepe myrtle [21][22][23] , whereas very few studies have been conducted on bamboo. In addition, research on the mechanisms of plant ecological responses to haze is insufficient. Results Changes in air quality on hazy and sunny days. Changes in atmospheric PM 2.5 and PM 10 concentration. The concentrations of PM 2.5 and PM 10 on the hazy day were significantly higher than those on the sunny day. On the hazy day, the concentrations of PM 2.5 and PM 10 ranged from 107.45-258.35 μg/m 3 and 161.83-387.73 μg/m 3 , respectively, compared to those on the sunny day, which were stable at approximately to 8 μg/m 3 and 40 μg/m 3 , respectively (Table 1). In addition, the ratio of PM 2.5 to PM 10 was significantly higher on the hazy day, i.e., over than 56% on hazy day, compared with 20% on the sunny day ( Table 1). The highest PM 2.5 and PM 10 were found at the nightfall time. The daily variations of PM 2.5 and PM 10 on the hazy day were greater than those on the sunny day and presented a trend of a slight increase followed by a sharp decrease and an increase to the highest concentrations at nightfall time, which reached 258.35 μg/m 3 and 387.73 μg/m 3 , respectively, outside the forest (Table 1). By contrast, the PM 2.5 and PM 10 concentrations were lower than 170 μg/m 3 and 275 μg/m 3 at any other time, both inside and outside forest ( Table 1). The bamboo forest had a significant effect on the PM. The concentrations of PM 2.5 and PM 10 decreased significantly about 20% and 15%, respectively, inside the forest at nightfall time. In the morning and at night fall, the PM 10 concentration in the interior of the forest was significantly lower than that outside the forest; no significant differences between the inside and outside of the forest were observed at any other time. At most times, the bamboo forest resulted in a decrease in the PM 2.5 (Table 1). Thus, it can be inferred from this research that bamboo forests can buffer changes in PM 2.5 and PM 10 during the day time. Changes in the atmospheric PAH content on hazy and sunny days. The total atmospheric content of the six main PAHs (T air ) on the hazy day was significantly higher than that on the sunny day. The main atmospheric PAHs were four-, five-and six-ring compounds (BahA, BaP, BbF, BkF, Icdp and BghiP), which are known to be principal components of PAHs. T air exceeded 12 ng·m −3 on the hazy day, compared with 1.04 ng·m −3 on the sunny day ( Table 2). In addition, the concentrations of the main PAHs, such as BbF, BkF, BaP, BahA, IcdP, and BghiP, were also higher than those on the sunny day ( Table 2). The bamboo forest had a similar effect on the atmospheric concentration of PAHs on the hazy and sunny days. On both days, T air inside the forest was significantly lower than that outside the forest. On the sunny day, the lowest T air measured inside the forest was 0.59 ng·m −3 . On the hazy day, T air inside the forest was 12.91 ng·m −3 , compared with 14.44 ng·m −3 outside the forest (Table 2). These results indicate that this bamboo forest had a positive effect in regulating atmospheric PAHs on the hazy day, resulting in improved air quality. Changes in the atmospheric concentration of VOCs on hazy and sunny days. 
On the hazy day, the atmospheric VOC content was significantly higher than that on the sunny day, and the 10 VOC compounds present in the highest concentrations differed significantly between the hazy and the sunny day. The atmospheric VOC content inside and outside the forest reached 94.77 and 156.85 µg/m 3 on the hazy day (Table 3), compared with only 62.53 and 76.40 µg/m 3 , respectively, on the sunny day (Table 4). More than 9 of the compounds present in the list of the 10 highest concentrations inside or outside the forest on the hazy day, such as benzoic acid, acetone, and decanal, were not the same as those on the sunny day (Tables 3 and 4). This might be caused by the haze and was correlated with the increase in atmospheric PM 2.5 . The bamboo forest resulted in a decrease in the VOC content on both hazy and sunny days. On the hazy day, the concentration of VOCs inside the forest was 39.58% lower than that outside the forest, and half of the VOCs present in the highest concentrations differed between the inside and outside of the forest (Fig. 1). On the sunny day, the concentration of VOCs inside the forest was 18.15% lower than that outside the forest, and most of the compounds present in the highest concentrations were the same inside and outside the forest (Fig. 2). This indicated that the bamboo forest played a positive role in regulating atmospheric VOCs. Changes in the PAH concentrations in leaves on hazy and sunny days. The total concentrations of the six main PAHs in leaves (T leaf ) were significantly higher on the hazy day. T leaf inside the forest and at the forest edge was higher on the hazy than on the sunny day, by approximately 110% and 60%, respectively (Table 5). Further analysis indicated that the concentrations of most compounds (besides BkF and BaP) increased rapidly on the hazy compared with the sunny day. On both the hazy and sunny days, T leaf inside the forest was significantly higher than that at the edge of the forest and reached 182.35 and 86.99 μg/kg, respectively (Table 5). On the hazy day, T leaf inside the forest was 130% higher than that at the edge of the forest. Most of the compounds exhibited similar trends. It can be deduced that the bamboo forest had a positive effect in reducing atmospheric PAHs. On the hazy day, the increase in the PAH concentration of the leaves was correlated with the increase in atmospheric PAHs outside the forest (Table 5), especially for the leaves inside the bamboo forest; after a long time, the PAH concentration decreased again. In this study, the sunny day occurred later in the year than the hazy day, and the concentration of PAHs in leaves had decreased by then, both inside and at the edge of the forest (Table 5). It can be inferred from this study that bamboo leaves can absorb some atmospheric PAHs, especially those inside the forest. In addition, some of the PAHs absorbed by leaves may be transferred to other bamboo organs, water, or soil. Changes in the concentrations of PAHs in soil on hazy and sunny days. On the hazy day, the total concentrations of the six main PAHs (T soil ) in soil were significantly higher than those on the sunny day. T soil inside and at the edge of the forest on the hazy day was higher than that on the sunny day, by approximately 235% and 70%, respectively. In addition, the concentrations of all six compounds were higher on the hazy than on the sunny day (Table 6).
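The inside-versus-outside reductions quoted above follow from a simple relative-difference calculation; as a small check, the sketch below reproduces the VOC figures from the concentrations given in the text (94.77 vs. 156.85 µg/m³ on the hazy day, 62.53 vs. 76.40 µg/m³ on the sunny day).

```python
def reduction_percent(outside, inside):
    """Relative decrease of the inside value with respect to the outside value."""
    return 100.0 * (outside - inside) / outside

# VOC concentrations reported in the text (µg/m^3).
print(f"hazy day : {reduction_percent(156.85, 94.77):.2f} % lower inside")  # ≈ 39.58 %
print(f"sunny day: {reduction_percent(76.40, 62.53):.2f} % lower inside")   # ≈ 18.15 %
```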
Table 3. The 10 VOCs present in the highest concentrations on the hazy day. Data marked with * were not in the list of the 10 VOCs present at the highest concentrations. Inside, interior of the P. edulis forest land; Outside, outside the P. edulis forest land; T VOCs , total concentration of the 10 VOCs present at the highest concentrations. The bamboo forest also had an important effect on the PAH content of the soil on the hazy day. Specifically, the concentrations of all 6 PAH compounds in the soil inside the forest were higher than those at the edge of the forest on the hazy day; however, significant differences between the inside and the edge of the forest were only found for BkF, BaP and BghiP. Discussion Although obvious bodily harm results from increasing PM 2.5 concentrations, it is difficult to completely eliminate the production of PM 2.5 from its different sources in China due to the rapid rate of economic development of this country. Therefore, it is important that research is conducted on how to remove atmospheric PM 2.5 and lower the concentrations of other atmospheric pollutants. Many studies have been conducted on the potential of trees as a mitigation tool for atmospheric particles. Forests have a significant positive effect on the environment through a reduction in pollution, or they directly affect PM in the atmosphere by removing particles 13,48 . Trees have been shown to significantly reduce atmospheric PM 12,49 , and an increase in tree vegetation can also help to remove PM 13 . In this study, the atmospheric PM concentrations were significantly decreased by the P. edulis forest. This might be related to the fact that P. edulis is an evergreen species that can reproduce and propagate through rhizomes. Previous studies showed that evergreen species had a greater ability to reduce PM than deciduous trees 50 , because the leaves of evergreen species persist year-round, especially in winter and spring when hazy fog occurs frequently, resulting in a greater reduction in PM 17,51 . In the present study, the PM 10 concentration inside the forest was higher than that outside the forest at several time points. This might be related to meteorological factors (e.g., temperature and humidity) in the early morning differing from those at other times, which can affect the ability of trees to remove PM 18 . In addition, trees may have a lower CO 2 assimilation rate, which can also affect PM absorption in a forest 51 . PAHs, a class of hydrocarbons that threaten human health, bind to PM 2.5 34, [52][53][54] . As a large fraction of the mass of PM, they are candidate components with respect to PM toxicity. In this study, the atmosphere contained a large amount of carcinogenic and mutagenic PAHs on the hazy day; BbF, BkF and BaP (5-ring compounds), followed by those with 4 and 6 rings, i.e., BahA, BghiP, IcdP and BbkF (BbF and BkF), were dominant in all TSP samples, which is consistent with reports for Guiyu 54 and Hong Kong 34 . PAHs are removed by trees at the same time as PM 2.5 , and the decrease in PM 2.5 might be a result of capture by plants and retention in the bamboo forest leaves and soil. On the one hand, plants decreased the PM 2.5 concentration by capturing PM from the air. Previous reports showed that PM can be intercepted by plant organs, such as leaves, bark, and twigs, resulting in the removal of PM 2.5 . The intercepted particles can be absorbed into the tree, though most particles that are intercepted are retained on the plant surface.
The ability to intercept PM 2.5 varies with tree species 48,55 , leaf roughness 56,57 , the number of cilia on leaves 58,59 , pore structure 11 , wax coat 60 , and environmental conditions 16,49 . Studies have indicated that PAHs are absorbed together with PM 2.5 33,56,61 . This is consistent with the findings of this study: atmospheric PAHs decreased while the PAHs in bamboo leaves increased significantly, and the total concentrations of the six main PAHs in leaves (T leaf ) were significantly higher on the hazy day. On the other hand, retention by forests is an important way to reduce atmospheric PM 2.5 . Trees have the capacity to retain PM 21,62 . In this study, PAHs, an important component of PM 2.5 , accumulated significantly in the soil on the hazy day. Researchers have reported that indicator substances in forest soils were significantly higher than in non-forest land 15,63 . Forests alter the sedimentation rate of PM 2.5 and increase the rate at which PM infiltrates the soil. According to tracking techniques based on radioactive substances or indicator substances, PM in forests was significantly higher than that in non-forest land 19,64 . Studies conducted in coniferous forests of central Japan and in Norway spruce forests also indicated that the forest canopy significantly altered the sulfur concentration and sedimentation rate of PM 2.5 14,19 . The effect of the forest on air quality was more obvious under hazy weather conditions. The effects of weather conditions on PM were very significant 18,65,66 . The wind speed affected the horizontal diffusion of aerosols. A temperature rise was conducive to aerosol diffusion and was also beneficial to secondary aerosol production. Humidity would cause ultra-fine aerosols to aggregate 65 . When the temperature in the forest increased, vertical convection in the atmosphere increased and the concentrations of PM 10 and PM 2.5 in the forest belt were reduced. An increase in relative humidity increased the concentrations of PM 10 and PM 2.5 66 . Under foggy conditions, the temperature was generally lower and the wind speed was lower. The droplets that make up the fog were suspended in the atmosphere near the ground layer. These droplets readily absorbed polluted particles from the air, which affected the distribution of organic pollutants in the atmosphere. The daily average concentrations of individual PAHs were significantly higher than on sunny days and remained high throughout the day and night 34 . When the air quality in foggy weather was particularly poor, haze formed easily. The level of PAHs in leaves was elevated on hazy days 56 . Many problems have also been found in plant materials contaminated by PAHs 67,68 . Upon exposure to pollution, such as PAH pollutants, the leaf surface and structure changed 69 ; for example, PM was found in stomata 70 . The leaf surface also became more susceptible to bacterial and fungal infections. These changes increased the ability to retain water, meaning that PAHs may indirectly affect the amount of retained rainfall 37 . Thus, hazy days affected the plant materials, and plant leaves showed a greater ability to capture pollutants. This is consistent with our study, in which bamboo showed a significant ability to remove PM 2.5 and PAHs, and the PAHs in leaves increased significantly on the hazy day. VOCs are strongly related to PM 2.5 because photochemical oxidation and ozonolysis of monoterpenes can lead to secondary organic aerosol (SOA) formation 71,72 .
Photooxidation products of biogenic VOCs, mainly isoprene and monoterpenes, are significant sources of atmospheric PM in forested regions 42 . In this study, the concentrations of atmospheric VOCs were significantly different on the hazy and the sunny day, especially the 10 VOCs present in the highest concentrations. This might be because the air pollution had different sources of VOCs on the sunny and the hazy day. Changes in the sources of VOCs affect the components of VOCs 39 . This also indicated that VOCs and PM 2.5 were important factors contributing to the hazy day. Furthermore, atmospheric VOCs can be significantly regulated by plants 40 . The changes in atmospheric VOCs in this study might be attributed to changes in weather conditions or to the bamboo forest affecting the components of VOCs. Vegetation releases numerous VOCs into the atmosphere, particularly isoprene, monoterpenes, and sesquiterpenes, as well as a series of oxygen-containing compounds 41 . In addition, isoprene can also result in SOA, including species such as 2-methyltetrols (2-methylthreitol and 2-methylerythritol), C 5 -alkene triols (cis- and trans-2-methyl-1,3,4-trihydroxy-1-butene and 3-methyl-2,3,4-trihydroxy-1-butene) and 2-methylglyceric acid 42,73 . Isoprene SOA products have been detected at various forested sites around the world 74,75 , which is similar to this study because several such substances were also observed in the bamboo forest. In addition, the atmospheric VOC distribution might be affected by changes in the environment, as well as by the plant canopy, which can change the wind speed, temperature, and humidity, among other factors 76 . This indicated that the bamboo forest regulated the VOCs to adapt to the polluted environment. Materials and Methods Experimental design. A P. edulis forest in Changxing, Zhejiang, was selected as the study site. The study consisted of 2 weather types (sunny day, hazy day) and 3 sites (interior of the bamboo forest, the edge of the bamboo forest, and outside the bamboo forest). Three bamboo forest stands were selected as 3 replications. Atmospheric PM 2.5 , PM 10 , PAH and VOC concentrations were measured at the 3 sites. The PAH concentrations in both bamboo leaves and soil were analyzed at 2 sites (the interior and the outside of the bamboo forest). The hazy day treatment was selected at a time when the haze had persisted for more than one month and the PM 2.5 concentration exceeded 200 µg•m −3 . The sunny day was selected at a time when the weather had been continuously fine for more than one week. The hazy day was considered the air pollution treatment, and the sunny day was considered the control treatment. This work was guided by the "Observation Methodology for Long-term Forest Ecosystem Research" of the National Standards of the People's Republic of China (GB/T 33027-2016). Sampling method. Method used for air sample collection. The method for air sample collection was based on the industrial and national standards for monitoring air and particulate matter 77,78 . Medium-flow air samplers (Wuhan Tianhong Instrument Limited Liability Company, Wuhan, China) were used to collect samples with a flow rate of 100 L/min for PAH analysis and a flow rate of 0.5 L/min for VOC analysis. The ambient air samples were collected from the atmosphere at heights of approximately 1.5 m above ground. Before sampling, filters were conditioned at 25 °C and 40% relative humidity in a desiccator for at least 24 h.
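As a hedged illustration of how this kind of filter-based sampling translates into the ng·m−3 values reported later, the sketch below converts a PAH mass found on a filter into an air concentration using the stated 100 L/min flow rate; the sampling duration and collected mass are hypothetical placeholders, not values from the study.

```python
def air_concentration_ng_per_m3(mass_ng, flow_l_per_min, minutes):
    """Concentration = collected mass / sampled air volume."""
    sampled_m3 = flow_l_per_min * minutes / 1000.0  # litres -> cubic metres
    return mass_ng / sampled_m3

# Hypothetical example: 1000 ng of total PAHs collected during an 8 h run
# at the 100 L/min flow rate used for PAH sampling in this study.
c = air_concentration_ng_per_m3(mass_ng=1000.0, flow_l_per_min=100.0, minutes=8 * 60)
print(f"{c:.2f} ng/m^3")  # 1000 ng / 48 m^3 ≈ 20.83 ng/m^3
```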
PAH-associated contaminants were isolated from the atmosphere by drawing air through a Whatman quartz fiber filter (QFF, 800-1000). VOC-associated contaminants were isolated from the atmosphere by drawing air through a Whatman quartz fiber filter for approximately 30 min. Background contamination was monitored by using operational blanks, which were processed simultaneously with the samples. After sampling, the filters were wrapped in aluminum foil and stored in ziplock bags at −20 °C. Methods for leaf and soil sample collection. Soil and leaf samples were collected according to national or industrial standards used to monitor the environment 79 . Leaves were collected from the branches at about 3 m height in 4 directions, and 10 leaves were collected in every direction. This was done with 3 replications, and the samples were gathered into an ice bag. The samples were then taken to the laboratory for processing. The surface soil samples (0-20 cm) were collected with the quartering method at every bamboo stand. The 3 replicate samples collected from each stand were mixed uniformly. The soil samples were ground with a pestle and mortar, screened through an 80-mesh sieve, and stored in a mason jar for the determination of PAHs and VOCs. Methods for detection and analysis. Method for PM 2.5 and PM 10 concentration detection. PM 2.5 and PM 10 concentrations were detected using a dust detector (DUSMATE) according to the industrial standard 77,81 . The instrument was adjusted to the on-line monitoring system before detection. PM 2.5 and PM 10 concentrations were detected every 3 h at a height of 1.5 m during the monitoring period. Extraction and analysis of PAHs. PAH analysis was performed using the GC-MS method, and the analysis details were provided by the relevant industrial standard for the quantification of air and particulate material 77 and elsewhere 54 . Briefly, the filter samples from ambient air, soil and bamboo leaves were repeatedly reflux extracted using a Soxhlet extractor with ether:hexane (1:9) for at least 16 h, with no less than 4 cycles per hour. Anhydrous sodium sulfate (15 g) was added to the extract to ensure free flow of the sodium sulfate particles. The extracts were then concentrated to 5.0 ml using rotary evaporation. Subsequently, hexane (5-10 ml) was added and rotary evaporated until less than 1 ml of hexane remained. To prevent interference, extracts were purified using silica gel chromatography. Extracts were analyzed for PAHs by gas chromatography-mass spectrometry using an Agilent 7890B gas chromatograph and a 5977A Series Mass Selective Detector (GC-MSD) operated in full ion scanning mode. Analysis of VOCs. VOCs were analyzed using the industrial standard methods for air and particulate monitoring 77,78 . The VOC concentration was measured using a thermal desorption instrument (Tekmar 6000/6016) interfaced with a gas chromatograph (HP 7890B) and a mass selective detector (HP 5977A, AMA Co., Germany). The working conditions of the TDS were as follows: gas pressure of 20 kPa; inlet temperature of 250 °C; desorption temperature of 250 °C for 10 min; cold trap temperature held at 120 °C for 3 min, followed by a rapid increase to 260 °C. An HP-5MS column (50 m, i.d. 0.25 mm, and film thickness 0.25 μm) was used for chromatographic separation. The temperature program was 40 °C for 3 min, followed by 10 °C/min up to 250 °C for 3 min and then an increase to 270 °C.
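As a small worked example of the oven program just described, the sketch below adds up the hold and ramp times; the final ramp rate to 270 °C is not stated in the text, so it is assumed here (hypothetically) to also be 10 °C/min.

```python
# GC oven program: 40 °C hold 3 min, ramp 10 °C/min to 250 °C, hold 3 min,
# then ramp to 270 °C (final ramp rate assumed to be 10 °C/min as well).
hold1 = 3.0
ramp1 = (250 - 40) / 10.0   # 21 min
hold2 = 3.0
ramp2 = (270 - 250) / 10.0  # 2 min, under the assumed final ramp rate
total = hold1 + ramp1 + hold2 + ramp2
print(f"approximate run time: {total:.0f} min")  # ~29 min
```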
The ionization energy of the MS (type 5975C, Agilent) was 70 eV; the ion source temperature was 230 °C; the quadrupole temperature and the interface temperature were 150 and 280 °C, respectively; and the mass scanning range was 28 to 450 m/z. The retrieval and qualitative analysis of the mass spectrometry data were accomplished using the NIST 2008 library installed on the instrument computer. In addition, the chromatographic peak area normalization method was used for the calculation of the relative concentrations from the mass spectrometry data. Statistical Analysis. Analysis of variance and Duncan's new multiple range test were performed with SAS 9.2 software (SAS Institute Inc., 1999). The data are presented as means ± S.D. Differences at P < 0.05 were considered significant. Availability of materials and data. The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request. Conclusion The bamboo forest showed strong effects on the reduction of air pollution. The concentrations of atmospheric PM 2.5 , PM 10 , PAHs and VOCs decreased inside the forest. By tracing the increase of PAHs in bamboo leaves and soils, it can be inferred that PM 2.5 is removed by plant capture and retention within the bamboo forest. This illustrates that the bamboo forest removes PM 2.5 by gathering PAHs into forest ecosystem components, e.g., bamboo leaves and soils. The bamboo forest also changed the VOC concentration in the air and changed the types of the 10 VOCs present in the highest-concentration list inside and outside the forest, thereby affecting PM 2.5 . PAHs were important substances for explaining the way in which PM 2.5 is regulated. These findings illustrate a regulating pathway for P. edulis on hazy days and help in understanding its ecological value, especially the role bamboo forests play in providing ecosystem services in urban or suburban areas. They also indicate that it is essential to study the effect of P. edulis forests under different plantation types and in different areas, and even the effect of other bamboo species. These findings also demonstrate that it is essential to study the responses of bamboo to ecological factors, especially polluted air, to understand the underlying biological and physiological mechanisms. Many physiological ecology methods, such as isotopic tracing, confocal laser scanning microscopy and manual simulation, might help to accomplish this.
2018-08-23T13:45:33.336Z
2018-08-22T00:00:00.000
{ "year": 2018, "sha1": "1f2a0da51114a0c1659803fb2b271edab3331a99", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-018-30298-9.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1f2a0da51114a0c1659803fb2b271edab3331a99", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Environmental Science" ] }
218898121
pes2o/s2orc
v3-fos-license
Role of fetal echocardiography in morphologic and functional assessment of fetal heart in diabetic mothers Diabetes mellitus (DM) is the most common medical disorder encountered during pregnancy and includes type I, type II, and gestational diabetes. It may predispose to various complications including fetal malformation, macrosomia, spontaneous abortion, stillbirth, neonatal death, and intrauterine growth retardation. Hypertrophic cardiomyopathy (HCM) is one of the common anomalies associated with diabetes. Fetal cardiac function analysis might provide important information on the hemodynamic status and cardiac adaptation to different perinatal complications. The mean septal thickness in the diabetic group was 0.7163 ± 0.1746 cm and 0.4989 ± 0.08068 cm in the control group. The mean myocardial thickness of the right ventricular free wall in the diabetic group was 0.6532 ± 0.13792 cm and 0.4874 ± 0.07482 cm in the control group. The mean myocardial thickness of the left ventricular free wall in the diabetic group was 0.6437 ± 0.13421 cm and 0.4737 ± 0.07573 cm in the control group. The mean value of the myocardial performance index (Tei Index) in the diabetic group was 0.6232 ± 0.15606 and 0.4626 ± 0.04357 in the control group. From our study, we can conclude that a complete prenatal echocardiographic study should be mandatory in fetuses of diabetic mothers due to the high risk of congenital heart defects and the onset of hypertrophic cardiomyopathy with fetal cardiac function impairment in the third trimester. Early diagnosis of congenital heart defects, as well as evidence of hypertrophic cardiomyopathy and fetal cardiac function impairment that occur in fetuses of diabetic mothers, will definitely guide prompt postnatal therapy and care for those neonates. Background Diabetes mellitus is the most common medical disorder encountered during pregnancy and includes type I, type II, and gestational diabetes. It may predispose to various complications including fetal malformation, macrosomia, spontaneous abortion, stillbirth, neonatal death, and intrauterine growth retardation [1]. The risk of congenital anomalies is increased in infants of diabetic mothers and is estimated to be between 2.5% and 12%, with over-representation of congenital heart defects [2]. Hypertrophic cardiomyopathy (HCM) is one of the common congenital anomalies associated with diabetes mellitus (DM) and thus requires a high index of suspicion, as the specific management may vary; for example, inotropic agents or digoxin, which may be used in heart failure associated with structural heart defects, are contraindicated if hypertrophic cardiomyopathy is present [3]. The interventricular septum is preferentially affected, but both right and left ventricular free walls may also be involved, predominantly the left. Manifestations of myocardial hypertrophy are often subtle; however, the hypertrophy can be detected by standard fetal echocardiography, usually by comparing septal thickness with control cases [4]. In infants of diabetic mothers, it is suggested that HCM arises from the effects of an excess insulin level. The mechanism by which insulin causes ventricular hypertrophy has not yet been explained, but the heart is a main target for insulin, and the expression of functional insulin receptors by the cardiomyocyte is comparable with that of other insulin-sensitive cells [5]. Thus, it is suggested that an increase in the fetal insulin level can trigger hyperplasia and hypertrophy of myocardial cells.
This hypertrophy mainly affects the interventricular septum and can occur despite tight glycemic control [6]. Gestational diabetes mellitus (GDM) is defined as carbohydrate intolerance recognized for the first time during pregnancy and usually resolves after delivery. The outcome of gestational diabetes is good especially with controlled blood glucose levels. However, GDM increases the risk of a number of fetal adverse outcomes. Fetuses of diabetic mothers are prone to fetal hyperglycemia and hyperinsulinism secondary to maternal hyperglycemia [7]. Despite the recognition of fetal myocardial hypertrophy, there is still controversy about its effect on global cardiac function [6]. However, there are reported cases of severe perinatal cardiac dysfunction and fetal deaths [8]. Fetal cardiac function analysis may provide an important information on the hemodynamic status and on the cardiovascular adaptation for different perinatal adverse effects [9]. A Doppler-derived index of the right and left ventricular myocardial performance combining systolic and diastolic time intervals was described in the literature by Tei et al. 1995 [10] .The Tei Index (TI) has been described to be a non-invasive , useful, Doppler-derived myocardial performance index that acts as a combined index of global myocardial function. By integrating only time intervals, the index is less dependent on precise imaging or anatomy. Moreover, the TI is independent of both ventricular geometry and heart rate. The TI is defined as the summation of the iso-volumic contraction time (ICT) and the iso-volumic relaxation time (IRT) divided by the ejection time (ET) [11] . Methods The objective of this study is to assess the impact of maternal pre-gestational diabetes whether type I or type II and gestational one on fetal cardiac morphology and function. This study is approved by our institutional review board. Patients A prospective study included 40 metabolically controlled diabetic pregnant ladies and 60 normal non-diabetic pregnant ones (control group) between 28 and 40 weeks gestation dated by LMP. Full history was collected from all ladies including age, parity, and history of drug intake or any associated medical disorders. Fasting blood sugar analysis results are obtained from all cases to differentiate between diabetic and control cases. Fetal biometry was performed for all cases, including measurements of bi-parietal and occipito-frontal diameters, head circumference, abdominal circumference and femur length. Doppler examination of umbilical , middle cerebral arteries and ductus venosus was also done. Exclusion criteria Cases with any other associated medical disorders were excluded from the study. Cases of structural abnormalities including different fetal systems apart from fetal heart as well as cases of fetal growth restriction (IUGR) were excluded from our study. Cases of severe polyhydramnios or multiple gestations were excluded from the study. Imaging protocol A complete standardized fetal echocardiogram was performed for all diabetic pregnant ladies for full structural assessment. The control group underwent just basic and extended basic fetal cardiac examination according to (ISUOG guidelines, 2013) [12]. Measurement of the end diastolic IVS (interventricular septal thickness) and myocardial free walls in lateral sub-costal view in some cases and apical or basal four chamber view in other cases (depending on fetal position at the time of scan) just inferior to atrio-ventricular valves were performed for all cases. 
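The index introduced above, TI = (ICT + IRT)/ET, reduces to a single arithmetic step once the three time intervals have been read off the Doppler trace; a minimal sketch is given below, where the interval values in the example are hypothetical and not taken from the study data.

```python
def myocardial_performance_index(ict_ms, irt_ms, et_ms):
    """Tei index (MPI) = (isovolumic contraction time + isovolumic relaxation time) / ejection time.

    All three intervals must be given in the same unit (milliseconds here);
    the index itself is dimensionless.
    """
    return (ict_ms + irt_ms) / et_ms

# Hypothetical example intervals (ms), for illustration only.
print(f"MPI = {myocardial_performance_index(ict_ms=33.0, irt_ms=43.0, et_ms=170.0):.2f}")  # ≈ 0.45
```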
Doppler waveform for the left Mod-MPI (modified myocardial performance index) according to Hernandez et al. (2012) [9] was performed. The pulsed Doppler sample volume was placed on inner wall of the ventricular septum above the mitral valve and below the aortic valve in the four chamber view with a basal or an apical projection allowing simultaneous inflow and outflow display from the left ventricle. Then, time periods for the left Mod-MPI were measured: ICT was measured from the mitral valve closure click to the opening of the aortic valve click. IRT was measured from the aortic valve closure click to the opening of the mitral valve click. ET was measured from the opening to the closure of the aortic valve click. Statistical analysis The statistical analysis was carried out using SPSS software package version 17.0 (SPSS Inc., Chicago, IL, USA). Obstetric characteristics were presented as mean + standard deviation. The Tie Index, RV wall thickness, LV wall thickness and IVS thickness obtained from the fetuses were plotted against gestational age and the correlation coefficients were determined by using Pearson's correlation. The normal values of these variables were presented as 5th, 50th, and 95th percentile ranks. P value < 0.05 was considered statistically significant. Normograms and percentile fitted curves were obtained. Results From 40 diabetic cases examined in our study, 38 was applicable to our study regarding functional assessment using tie index and assessment of septal and myocardial free wall hypertrophy. One of our cases appeared to have a structural heart defect known as visceral heterotaxy syndrome (left isomerism) and the other was diagnosed to have tetralogy of Fallot. From our statistical results, we found the following: We found that the mean interventricular septal thickness (IVST) in the diabetics patients group was 0.7163 ± 0.1746 cm and 0.4989 ± 0.08068 cm in the control group (Fig. 1). A significant statistical difference was found regarding the septal thickness between diabetics group and normal group (P < 0.05). We found that the mean myocardial thickness of the right ventricular free wall (RVWT) in the diabetic group was 0.6532 ± 0.13792 cm and 0.4874 ± 0.07482 cm in the control group (Fig. 2). A significant statistical difference was found regarding the right ventricular free wall myocardial thickness between diabetic group and control group (P < 0.05). We found that the mean myocardial thickness of the left ventricular free wall (LVWT) in the diabetics group was 0.6437 ± 0.13421 cm and 0.4737 ± 0.07573 cm in the control group (Fig. 3). A significant statistical difference was found regarding the left ventricular free wall myocardial thickness between diabetics group and control group (P < 0.05). We found that the mean myocardial performance index (Tei Index) value in the diabetics group was 0.6232 ± 0.15606 and 0.4626 ± 0.04357 in the control group (Fig. 4). A significant statistical difference was found regarding the myocardial performance index between diabetic group and control group (P < 0.05). The estimated fetal weight and feto-placental Doppler study did not significantly differ between the diabetics and the control groups. Discussion We aimed in our study to assess the effect of maternal pre-gestational and gestational diabetes on fetal cardiac morphology and function for which complete fetal echocardiogram was performed in every case to rule out any structural defects. 
Measurement of ventricular myocardial free walls and interventricular septal thickness was done as a tool for evaluation of cardiac hypertrophic cardiomyopathy that occurs in fetuses and neonates of diabetic mothers. Dopplerderived modified myocardial performance index (Mod-MPI) was used to assess global overall systolic and diastolic function to display whether impairment of fetal cardiac function occurred or not. The study included 40 diabetic pregnant women and 60 control cases. Both groups were within a comparable gestational age (between 28 and 40 weeks gestation). The mean septal thickness in the diabetics group was 0.7163 ± 0.17 cm (Figs. 6 and 7) and 0.4989 ± 0.08 cm (Fig. 5) in the normal group. A significant difference was found regarding the septal thickness between diabetic group and normal group (P < 0.05). These results are in concordance with results of previous studies. [6] reported that the septal thickness was increased in fetuses of diabetic mothers compared with the control group with statistical significant difference (P < 0.001) despite being comparative study between cases of controlled gestational diabetes (cases of increased HbA1c values above 6.5% were excluded) and normal ones. It was found that 66.6% (16/ 24) of the diabetic pregnancies were above the 95th percentile for the IVS thickness in the control group, which was estimated at 3.51 mm; however, measurements of IVS thickness in both groups were in normal ranges in this study. Prefumo et al. (2005) [8] reported cases of marked fetal myocardial hypertrophy associated with signs of myocardial insufficiency in fetuses of diabetic mothers. Balli et al. (2014) [14] showed that the mean septal thickness at 36 weeks gestation was 0.452 ± 0.49 cm in group of maternal diabetes compared to 0.38 ± 1.77 cm in the control group with a significant statistical difference (P < 0.001). In this study, despite being statistically different, no pathological IVS hypertrophy was found. However, evidence of diastolic dysfunction in the study group was found by application of different parameters for assessing diastolic dysfunction. This study has shown that maternal diabetes was associated with a significantly increased thickness of all cardiac walls (Figs. 6 and 7) compared with normal pregnancies (Fig. 5), confirming in part the results of previous studies done by Jaeggi et al. 2001 [15]. Penney et al. (2003) [16] focused on cases with pre-existing diabetes, but Zielinsky, 2009 [17] applied his study on population with gestational diabetes mellitus. Both of them reported significant difference in myocardial-free walls and interventricular septal thickness between both studies and control groups. In a series of neonates and infants, the cardiomyopathy (CM) was noticed in about 2-7%, but probably during the fetal life the prevalence is higher reaching 6-11%. The high intrauterine loss, occurring in almost one third of affected fetuses, most likely explains these differences [18]. Still, authors can find some cases with septal hypertrophy, which are not symptomatic postnatally. Although most symptoms of cardiomyopathy may spontaneously regress within few weeks, sometimes, overt congestive heart failure develops, tachycardia, tachypnea, gallop rhythm, and hepatomegaly [19]. 
In our study, the mean septal and myocardial free wall thickness was statistically significant compared to the control group with above 95 th percentile mean value of tie index denoting evidence of hypertrophic cardiomyopathy with impairment of left ventricular function. It was not possible in this study, to expect, whether theses fetuses would be symptomatic or not by the ultrasound and Doppler criteria. Postnatal evaluation by a specialized neonatologist and a cardiologist is recommended. Some studies suggest that gestational diabetes can be a risk factor for congenital heart defects, but there are still controversies about the extent of such association [20]. In our study, the incidence of congenital heart defects among study cases represents 5% of the total number of cases with diabetes. Previous studies showed an 8.5% incidence of cardiac malformations in fetuses of diabetic pregnancies [21]. Our study was restricted to the limited patients' number involved in our study; however, if a large number of diabetic patients could be included, incidence might become different. In our study, we also aimed to assess the effect of maternal diabetes on fetal myocardial function using the Modified myocardial performance index (MPI). MPI is a global indicator of cardiac function. Increased in MPI indicates globally impaired ventricular function in maternal diabetes during the intermediate and late pregnancy periods [22]. Hernandez-Andrade et al.(2012) [9] defined age-adjusted reference values for the left ventricular Mod-MPI in normal fetuses at 19-39 weeks gestation and calculated the 5th, 50th, and 95th percentiles for the Mod-MPI as well as its components (ICT, ET, and IRT). It was found that in normal fetuses, the Mod-MPI did not exceed 0.43 at any gestation (at a fetal heart rate of 140 bpm: median 0.45 range 0.36-0.54). Another more recent study revealed the normal value of MPI is about 0.51 ± 0.12 [23]. These values are in close agreement with our results for the control group, but are much lower than those in the study group indicating reduced left ventricular wall compliance in fetuses of diabetic mothers with subsequent impairment of fetal overall cardiac function. Our study showed a statistically significant (P < 0.001) higher overall Mod-MPI in the study group as a whole (median 0.61, range 0.39-0.83) compared to the control group (median 0.47, range 0.32-0.62). Conclusion From our study, we can conclude that prenatal complete echocardiographic study should be mandatory due to high risk of congenital heart defects and onset of hypertrophic cardiomyopathy with impairment of fetal cardiac function in fetuses of diabetic mothers in the third trimester. Early diagnosis of congenital heart defects, impairment of fetal cardiac function as well as evidence of hypertrophic cardiomyopathy that occurs in fetuses of maternal diabetes will definitely guide prompt postnatal therapy and care for those neonates.
2020-05-27T15:00:27.909Z
2020-05-27T00:00:00.000
{ "year": 2020, "sha1": "b5638cd0537307b8505192e1491c831465ea1311", "oa_license": "CCBY", "oa_url": "https://ejrnm.springeropen.com/track/pdf/10.1186/s43055-020-00207-0", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b5638cd0537307b8505192e1491c831465ea1311", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
94349412
pes2o/s2orc
v3-fos-license
Quantum Criticality and Superconductivity in SmFe1−xCoxAsO One of the iron pnictide superconductors, SmFe1−xCoxAsO, shows a domelike TC curve against the Co concentration x. The parent compound SmFeAsO shows a crystal-structure transition and antiferromagnetic spin density wave (SDW) ordering. With increasing x, the structural transition temperature TD and the SDW ordering temperature TN decrease and reach 0 K at the critical concentration xC. It is not yet clear whether the critical concentrations for TD and for TN coincide with each other or not. In the present report, we investigated the structural transition by low-temperature x-ray diffraction, and the SDW ordering and the superconducting transition by measuring the magnetization using a SQUID magnetometer (MPMS). We determined the phase diagram of TD, TN and the superconducting transition temperature TC against the Co concentration x near xC precisely. We found that the maximum of the domelike TC curve is located near xC, suggesting a QCP. Introduction The mechanism of high-temperature superconductivity in cuprates has been discussed for many years. Considering the non-Fermi-liquid behavior in the normal state and a domelike superconducting transition temperature T C curve plotted against hole-doping concentration, the importance of the spin fluctuations around the SDW quantum critical point (QCP) has been pointed out [1]. In the FeAs-based parent compounds, there is a structural phase transition in the temperature range of 100-200 K as well as the SDW ordering [2]. Various chemical-doping approaches can suppress the structural phase transition and SDW ordering, and superconductivity consequently appears at T C . Domelike T C (x) curves are widely observed in these iron-based pnictides, e.g., in the Co-doped SmFe 1-x Co x AsO system [3]. With increasing Co concentration, the structural phase transition temperature T D and the SDW ordering temperature T N decrease and reach 0 K at the critical concentration x C . The maximum of T C occurs near x C , suggesting a QCP. In the present paper, we report an investigation of the structural phase transition and the SDW ordering in the Co-doped SmFe 1-x Co x AsO system, very precisely near the QCP, observed by low-temperature x-ray diffraction (XRD) and by measuring the magnetization, respectively. Phase diagram The sample preparation method is the same as described in our previous papers [2]. The sample purity was first verified by powder XRD measurements at room temperature using a D/Max-rA diffractometer with Cu Kα radiation and a graphite monochromator. The low-temperature XRD was measured down to about 1.5 K. The low-temperature XRD measuring method was also described in our previous paper [2]. The magnetization was measured during cooling in a magnetic field (field cooling), and it was also measured with increasing temperature in a magnetic field of 10 Oe after zero-field cooling to 1.8 K. In both magnetization curves, the Meissner effect was observed, and the diamagnetism measured during field cooling is smaller than that measured after zero-field cooling, as expected. As shown in Fig. 1a, an abrupt decrease in the Meissner diamagnetism was observed between 3 and 4 K, whereas a sharp jump of the magnetization at 5 K was observed in Fig. 1b. These results can be explained as follows. When a superconducting compound is cooled in a magnetic field, the magnetic field can penetrate into the crystal to some extent. Then the small magnetization change due to the SDW transition at 3-4 K can be observed.
However, after zero-field cooling, the applied small magnetic field of 10 Oe can hardly penetrate into the superconducting sample. Turning to the low-temperature XRD results, once we obtained the fitting parameters, using the same profile function, we attempted to fit the spectra by assuming an orthorhombic structure at 7 K and 4 K. The resultant lattice constants, in units of angstroms, are as follows. At 7 K: a = 3.93891, b = 3.93894, c = 8.3848. At 4 K: a = 3.93815, b = 3.93986, c = 8.44378. It is found that the crystal distortion has occurred at 4 K. In the SmFe 1-x Co x AsO system, the phase diagram shown in Fig. 2 was obtained by using the results of the magnetization measurements for x = 0.07, 0.075 and 0.08, and also by using the results of the XRD and resistivity measurements shown in Ref. [3] for x = 0, 0.01, 0.025 and 0.05. In Fig. 2, the temperature axis is on a logarithmic scale. For x = 0.08 and 0.1, the structure change cannot be observed down to 1.8 K. From the phase diagram shown in Fig. 2, there is a possibility that the QCP of the structural phase transition is located at a different concentration from the QCP of the SDW transition. Both phase transitions, that is, the structural and the magnetic one, will have their own QCPs. Then, at the critical concentration, we can expect quantum critical fluctuations, which will give a decrease of the I.I. at very low temperatures. Fig. 3 also supports the crystal distortion at around 5 K. It will be very interesting to measure the I.I. at the real critical concentration of the structural phase transition, which may be x = 0.12-0.15. We are going to do these experiments.
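A convenient way to quantify the distortion implied by the lattice constants quoted above is the orthorhombicity δ = (b − a)/(b + a); the sketch below evaluates it at 7 K and 4 K. The choice of δ as the distortion measure is a common convention, not one stated in the paper.

```python
def orthorhombicity(a, b):
    """delta = (b - a) / (b + a); zero for a tetragonal (a = b) cell."""
    return (b - a) / (b + a)

# Lattice constants (angstrom) reported in the text.
for label, a, b in [("7 K", 3.93891, 3.93894), ("4 K", 3.93815, 3.93986)]:
    print(f"{label}: delta = {orthorhombicity(a, b):.2e}")
# ~3.8e-06 at 7 K (essentially undistorted) versus ~2.2e-04 at 4 K,
# consistent with the onset of the orthorhombic distortion below ~5 K.
```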
2019-04-04T13:05:14.255Z
2012-12-17T00:00:00.000
{ "year": 2012, "sha1": "46bd6f2c653a082e001787fff27c6eb5f3b5a5f4", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/400/2/022047", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "5196441c0bc1030e84cfe5505d461b54a293f71f", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Chemistry" ] }
237703668
pes2o/s2orc
v3-fos-license
Deposition of TiO 2 Thin Films by Dip-Coating Technique from a Two-Phase Solution Method and Application to Photocatalysis A variation of the sol-gel dip-coating deposition technique is proposed, in which the precursor solution exhibits two separate liquid phases, reaching an equilibrium state in a heterogeneous solution. Structural, optical and photocatalytic properties of TiO 2 films grown from the proposed two-phase method are shown and discussed, along with the properties of films deposited when the top phase presents distinct heights, which are observed through SEM images and optical transmittance spectra. The dominant crystalline phase is anatase for all the films prepared. Films are tested for their photocatalytic efficiency, using methylene blue as the degradation dye. It has been found that films deposited through the two-phase method are more efficient in the photocatalytic degradation of methylene blue. Introduction Titanium dioxide (TiO 2 ) is a semiconductor oxide that has drawn attention due to its application in the creation of several sorts of devices. It is worth mentioning its role as a photocatalyst for the removal of air and water pollutants 1 , but its applications have been studied in a variety of fields and in a multitude of sample forms. It may be used in gas sensors 2,3 , doped with different elements [4][5][6] , in solar cells 5,7 and in prosthetics 8 , besides the photocatalytic function itself 1,9,10 . To accomplish the goals of such a variety of applications, knowledge of the properties and characteristics of this material is fundamental, and TiO 2 has a vast and well-established literature available. TiO 2 has a wide bandgap between 3.0 and 3.8 eV depending on the crystal structure 11,12 . Anatase, the crystal structure most commonly obtained through heat treatments below 700 °C 13 , has an indirect bandgap of about 3.25 eV 14 . Rutile, the most thermodynamically stable structure 15 , has a direct bandgap of around 3.0 eV 14 . Other structures, such as brookite, TiO 2 -B, baddeleyite and columbite, are less studied, either because of a lack of known interesting properties or because of the difficulty of their preparation 16,17 . The TiO 2 structure, as well as the average crystallite size, may be controlled either by the thermal annealing temperature 18 or through the precursor solution's pH 19 . Among the possible applications of TiO 2 , research on its use as a photocatalyst has been growing side by side with the increasing concern about water pollution and climate change 20,21 . The photocatalytic property of TiO 2 has been applied in a variety of ways: to break down CO 2 molecules 22 and complex carbon molecules in water [23][24][25] , to increase potable water yield 26 , or to split water molecules, producing hydrogen as a renewable energy source 27 , with the anatase phase being the most commonly used. Lowering the bandgap through doping 28 and increasing surface area and reactivity through different deposition and growth methods 29 are some of the approaches that have been shown to increase the photocatalytic efficiency of TiO 2 .
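Since photocatalytic activation requires photons with energy at or above the bandgap, the absorption edge corresponding to the values quoted above follows from λ = hc/E_g; the short sketch below evaluates it for the anatase and rutile gaps cited in the text.

```python
# Absorption-edge wavelength from the bandgap: lambda (nm) ≈ 1239.84 / E_g (eV).
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV·nm

def edge_wavelength_nm(gap_ev):
    return HC_EV_NM / gap_ev

for phase, gap in [("anatase", 3.25), ("rutile", 3.0)]:
    print(f"{phase}: E_g = {gap} eV -> edge ≈ {edge_wavelength_nm(gap):.0f} nm")
# anatase ≈ 381 nm, rutile ≈ 413 nm, i.e. near-UV illumination is required
# to drive the photocatalysis discussed in the text.
```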
Thin films have a variety of applications in different areas of scientific and technological knowledge. Several methods for thin film preparation have been designed and used to achieve this diversity of applications, such as epitaxial growth, resistive evaporation, chemical vapor deposition (CVD), atomic layer deposition (ALD), sol-gel methods (dip-coating, spin-coating and doctor blading) and Langmuir-Blodgett, among others. Among these techniques, sol-gel offers a simplicity in the production of semiconductor oxide films at different size scales that other methods lack. Among the methods using a sol-gel precursor solution, dip-coating is a well-known method due to its cost and efficiency 29 . This deposition process may be separated into three stages: 1. immersion, where a substrate is immersed into a precursor solution at a constant rate, followed by a time interval in which the substrate remains submerged, interacting with the precursor solution; 2. deposition and drainage: when dragging the substrate out of the container holding the precursor solution, at a constant rate, part of the fluid that adheres to the substrate surface overcomes the surface tension of the solution and another part is drained back into the container, forming a thin layer of gel on the substrate surface; 3. evaporation: the solvent present in the deposited thin layer, adhered to the substrate, immediately begins to evaporate when exposed to air, leading to densification of the adhered material. The rate of evaporation depends on atmospheric properties such as temperature, pressure and composition, and affects the resulting film structure and composition 30 . With a higher rate of evaporation, the densification, which starts with the aggregation of particles in the sol-gel and the eventual gelation of those aggregates, happens faster, resulting in a more fragile and porous structure 13,31 . Heating the solution, and consequently the film, may result in further densification, but with a rapid loss of solvent the mobility of particles and the interactions between the deposited solution and the substrate decrease drastically, possibly reducing the bonding strength between the aggregates and the substrate 13 . As a counterpoint, heating the substrate may have the opposite effect. Increasing the energy available to the particles may increase the average length that the particles travel on the substrate surface, eventually bonding with deeper sites and creating stronger bonds. Santos et al. 32 found that keeping substrates at a higher temperature leads to higher optical transmittance in the near-infrared region, associated with a lower concentration of free electrons. The most complex aspect of the deposition process, then, is the stability and composition of the precursor solution: the initial reagents must be soluble in water or react and form products that are soluble in water. All molecules present in the solution, ligands, additives and surfactants must decompose during thermal annealing, and to maintain stability in the solution and allow deposition by wetting, counter ions and ligands must not deteriorate, in order to control the wetting properties. The interaction of fluids and solid components makes the method interesting and complex from a theoretical point of view, and establishing direct relationships between the properties of precursor solutions and the properties of the final product is not a straightforward task.
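The entrained-film thickness set during the deposition-and-drainage stage is often estimated with the classical Landau–Levich relation; this is standard dip-coating theory rather than something stated in the paper, and the fluid properties used below are hypothetical, ethanol-like values chosen only to illustrate the scaling with withdrawal speed.

```python
# Landau-Levich estimate of the wet film thickness entrained during withdrawal:
#   h = 0.94 * (eta * U)^(2/3) / (gamma^(1/6) * (rho * g)^(1/2))
# Valid at low capillary number (Ca = eta*U/gamma << 1), i.e. slow withdrawal.

g = 9.81  # m/s^2

def landau_levich_thickness(eta, gamma, rho, speed):
    return 0.94 * (eta * speed) ** (2.0 / 3.0) / (gamma ** (1.0 / 6.0) * (rho * g) ** 0.5)

# Hypothetical sol: viscosity 2 mPa·s, surface tension 23 mN/m, density 900 kg/m^3.
eta, gamma, rho = 2e-3, 23e-3, 900.0
for speed_cm_min in (1.0, 10.0):
    u = speed_cm_min / 100.0 / 60.0  # cm/min -> m/s
    h = landau_levich_thickness(eta, gamma, rho, u)
    print(f"{speed_cm_min:4.1f} cm/min -> wet film ≈ {h*1e6:.2f} µm")
# The wet layer thins further on drying and annealing; the point of the sketch
# is only the ~U^(2/3) dependence of thickness on withdrawal speed.
```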
In this work, aiming to increase the number of available parameters to better control and further develop the dip-coating method, the precursor solutions are prepared to reach equilibrium in the separation between two liquid phases.The proposed deposition system is outlined in Figure 1 and compared with the generally used method.Two-phase systems for dip-coating deposition process appear in the literature with Ceratti and coworkers 33 where the heterogeneous is investigated as a more efficient way of obtaining films, requiring a smaller volume of the precursor solution in the dip-coating process.Ceratti's method uses heavy liquids, such as perfluorodecaline, gallium or mercury, to maintain a thin liquid layer of the depositing precursor solution near the top of the container, forming the first phase with which the substrate interacts during deposition, allowing it to be fully wetted by the solution, without having to completely fill the container with the precursor solution.The method proposed here works in a reversed way: a less dense phase, without the presence of the material to be deposited (region A in Figure 1) floats on top of the precursor solution containing the material (region B in Figure 1) to be deposited.While the sol-gel solution is divided between A and B, film samples can be divided in 2 regions as well, denoted as I and II.While region I interacts only with the top phase (A), region II of film samples have interacted with both phases (A and B) during deposition. It is possible to establish similarities of the present proposal with the Langmuir-Blodgett method 31 and with the method proposed by Ceratti 33 in the sense that, by changing the disposition of the material dispersed in the aqueous medium, it is possible to modify the interaction mechanisms between the particles and the medium, creating new structures and morphological properties in the deposited film.Besides, it has been shown that it can be more efficient than the conventional method, as proposed in the case of Ceratti's procedure.Similar methods to the proposed here are more commonly found for film formation from polymers [34][35][36] .As the methods are different, it is possible to compare the goals of this paper with Ceratti et al.'s 33 .In this paper the bi-phasic system is created to become a new parameter to be studied and to see its effects on the deposited films, and the "main" phase is the heavier, bottom one, different from the referenced work, where the material to be deposited is the higher one, which floats on top of a heavier, mostly inert liquid.Their goal was to validate their method as a way to reproduce films while being more efficient with solutions. 
One of the goals of the presently proposed method, to more easily investigate its effects on the produced films, is to reduce and eliminate the concentration of TiO 2 in the upper phase (A).The development of two-phase systems for film deposition provides an opportunity to deal with disadvantages of the sol-gel-dip-coating method such as the stability of the medium, easily oxidized or dried, the difficulty in preparation of solutions or their viability to be used in larger scales.Thickness control can be accomplished through the deposition rate, but the range of possible thicknesses can be narrow depending on the sol-gel solution used, with thinner films requiring a slower deposition rate.Eventually thermal annealing or rapid evaporation of solvents can crack film's structures.The proposed method could eventually complement the process with advantages, such as: better control of thickness and homogeneity, isolation of the precursor solution from air for facilitated storage, introduction of immiscible components, greater efficiency and possible new doping methods. In this paper we use the two-phase sol-gel dip coating deposition to verify the optical and morphologic properties of deposited films with distinct heights of the top layer (A in Figure 1), and the efficiency of this process applied to photocatalysis, using methylene blue as degradation dye.Alongside the resulting effects achieved by the method in photocatalysis is the reduction of material deposited, thinner films without slowing down deposition rate, as well as increasing surface area with less material.The change in surface tension dynamics also allows for the change in draining regime without change in deposition rate, which lead to the formation of structures found mostly in samples under the evaporation regime at high deposition rate (10 cm/min).Lastly, the increase in photocatalytic efficiency when applying the method is tied to the increase in surface area, which, with a smaller volume of material deposited and thinner film is an interesting achievement. Preparation of TiO 2 colloidal suspensions The proposed method for preparing precursor solutions was developed through derivation of the method used by Hanaor et al. 37 and Trino et al. 
38. The amounts needed for the preparation of 50 mL of the precursor solution are as follows: 185.0 mL of deionized water, 56.7 mL of isopropanol (CH3CHOHCH3), 2.6 mL of nitric acid (HNO3), and 15 mL of titanium isopropoxide (TTIP). In a beaker, deionized water and isopropanol are mixed, and the content is magnetically stirred while the acid is added slowly. Then, TTIP is added slowly, dropwise, and the mixture is stirred for 30 min. After 30 min, the beaker is covered with aluminum foil with small holes to reduce the evaporation rate, and the peptization process is started by heating the solution to 85 °C until the volume is reduced to 50 mL. To prepare the proposed two-phase system, however, a slight modification of this procedure was necessary, with the final volume reduction done at 120 °C until the volume reaches 15 mL. To this final solution, 5 mL of deionized water are added and the whole solution is stirred, capped and set aside to reach equilibrium in the phase separation. Thereafter, the solution is manipulated with pipettes to control the dimension of the second phase. The higher final heating temperature, as well as the volume reduction to 15 mL, mainly reduces the volume of isopropyl alcohol (boiling point: 82.5 °C) present in the solution as a co-solvent inside the semi-capped beaker. With a lower concentration of alcohol, the added volume of deionized water does not mix with the solution and thus tends, after keeping the system at rest for a while, to separate into the desired two phases for subsequent film deposition.

Thin film deposition Soda-lime glass substrates are cleaned and dried prior to use: they are left in a 9:1 solution of deionized water and neutral detergent (Extran) for 24 h, then washed in deionized water for about 5 min, quickly immersed in isopropanol and finally dried with a thermal blower. For film deposition, the substrates are attached to a substrate holder (Syringe Pump model MQBSG 1/302 connected to a controller model MQCTL 2000 MP, both from Microchemistry) that controls the dipping rate. The precursor solution is placed under the substrate in a beaker, and the substrate is then dipped at a fixed rate (10 cm/min) and removed at the same speed. After a TiO2 layer is deposited and the substrate is completely removed from the solution, it is left to drain and dry for about 10 min, draining any excess solvent back into the beaker. Intermediate heat treatments are carried out between layer depositions: the substrate is placed on a ceramic base with metal supports and introduced for 10 min into an oven preheated and stabilized at 150 °C. At the end of the heat treatment period, the film is set aside to cool down to room temperature and installed again in the substrate holder for deposition of a new layer. When the desired number of layers is reached, the sample is taken again into the oven; in this case, however, the thermal annealing process starts at room temperature and the temperature rises at a constant rate of 3 °C/min up to a target of 500 °C, which is stabilized and maintained for 2 h. Table 1 gives a list of the prepared films and the characteristics of the two-phase solution.
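To put the recipe above in quantitative terms, the nominal titanium content of the sol can be estimated from the volume of TTIP. In the sketch below, the density and molar masses are standard literature values for titanium(IV) isopropoxide and TiO2, not data reported in this work, so the numbers should be read as approximate.

```python
# Rough estimate of the nominal Ti (and equivalent TiO2) content of the sol,
# based on the 15 mL of TTIP used in the recipe above.
# Density and molar masses are textbook values, not measured in this work.
V_ttip_mL = 15.0     # TTIP volume from the recipe
rho_ttip  = 0.96     # g/mL, approximate density of Ti(OiPr)4
M_ttip    = 284.22   # g/mol, Ti(OC3H7)4
M_tio2    = 79.87    # g/mol, TiO2

n_Ti   = V_ttip_mL * rho_ttip / M_ttip   # mol of Ti in the batch
m_tio2 = n_Ti * M_tio2                   # g of TiO2 if fully converted

for V_final_mL in (50.0, 15.0):          # conventional and two-phase volume reductions
    print(f"final volume {V_final_mL:4.0f} mL : "
          f"~{n_Ti / (V_final_mL / 1000.0):.1f} mol/L Ti, "
          f"~{m_tio2 / V_final_mL:.2f} g TiO2 equivalent per mL of sol")
print(f"total batch: {n_Ti:.3f} mol Ti  (~{m_tio2:.1f} g TiO2 equivalent)")
```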
The chosen annealing temperature was such that the main structure formed was anatase, for its reported qualities in photocatalysis. Although the temperature interval to form anatase ranges from 300 to 700 °C 13, at 500 °C the available energy is enough to nucleate some crystallites and achieve some degree of crystallinity, while not enough to start the transition to rutile.

Characterization X-ray diffraction profiles were obtained on a Rigaku Miniflex 600 instrument, using incident Cu Kα radiation (1.54056 Å), a power of 40 kV and 15 mA of current and a scanning rate of 10°/min. These data were used to identify the main phase formed and to evaluate the average crystallite size. Optical characteristics of the films were obtained through optical absorption from the ultraviolet to the near infrared (250-1800 nm) in a Lambda 1050 UV/VIS/NIR Perkin Elmer spectrophotometer. Through this characterization it was possible to reliably confirm the phases found, as well as to follow the stages of the photocatalysis process. Scanning electron microscopy (SEM) measurements were performed using a Carl Zeiss scanning electron microscope, model LS15. For the photocatalysis experiments, methylene blue (MB), a dye with a well-known absorption spectrum and commonly used for degradation efficiency studies 23-25, was used. In this work, the dye is diluted in deionized water to create an aqueous solution with a methylene blue concentration of 2 × 10⁻³ g/L, or around 6 × 10⁻⁶ mol/L. The TiO2 films to be studied are then submerged in this solution and given 2 h in darkness to adsorb dye molecules on their surface. At the end of these 2 h, samples of the diluted dye are collected and an absorbance profile around 670 nm is measured for each (the maximum absorption of methylene blue is found at this wavelength 39). After this first step, the submerged films are irradiated for 180 min with ultraviolet light from an Osram mercury lamp (11 W), with an emission peak at 254 nm. Further samples of the diluted dye are collected at 90 min and 180 min. To standardize and allow comparison among the studied samples and with the existing literature, the degradation efficiency (η) was calculated following Equation 1,

d = 100 × (Ao − At)/Ao,   η = d/(sample area),   (1)

with d the percentage of dye degraded between the initial absorbance measurement (Ao) and the absorbance acquired at the respective step (At); the resulting data are normalized with respect to the sample's area.

Results and Discussion Figure 2 shows SEM images acquired for the two film regions and makes clear the existence of different regions in the samples deposited with the proposed two-phase system, as outlined in Figure 1. The lightest area of the film (region I) is exposed only to the top phase (A) during deposition, whereas the darkest area (region II) comes into direct contact with the denser phase (B) and passes through the top phase. The lightest area (region I) displays sparse structures forming across the substrate's surface; having interacted only with the top phase (A), it contains only a small amount of material deposited on its surface. Region II shows a higher volume of adhered material, as it interacted with both phases (A and B).
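A minimal numerical illustration of Equation 1, as reconstructed above, is given below; the absorbance values and the film area are hypothetical placeholders rather than measurements from this work.

```python
# Degradation percentage and area-normalized efficiency (Equation 1, as reconstructed above).
# The absorbance values and the film area are hypothetical placeholders.
def degradation_percentage(A0, At):
    """Percentage of methylene blue degraded, from absorbance at ~670 nm."""
    return 100.0 * (A0 - At) / A0

def efficiency(A0, At, area_cm2):
    """Degradation percentage normalized by the illuminated film area."""
    return degradation_percentage(A0, At) / area_cm2

A0 = 1.00                        # absorbance before irradiation (hypothetical)
series = {90: 0.82, 180: 0.61}   # absorbance after 90 and 180 min (hypothetical)
area = 2.5                       # film area in cm^2 (hypothetical)

for t, At in series.items():
    d = degradation_percentage(A0, At)
    print(f"t = {t:3d} min:  d = {d:5.1f} %,  eta = {efficiency(A0, At, area):.1f} %/cm^2")
```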
SEM images reveal differences in structures formed in regions I and II, as seen in Figure 2, the inset from region I shows a sparse deposition with islands of material adhering to the substrate distant from each other, while in region II the rough surface is a consequence of two layers of deposited material and its interaction with the bottom phase (B).Despite the efforts so that the top phase (A) did not present a high concentration of TiO 2 particles, the diffusion at the interface and the turbulence created during the deposition process ensures that a number of TiO 2 particles are released from the bottom (B) to the top phase (A), where they may remain suspended.More SEM images can be seen in Figure 3 for different samples, as described in Table 1, corresponding to images taken from region I of respective films and show the presence of dispersed structures, probable result of suspended particles in the top phase (A). The lower concentration of TiO 2 particles in the top phase (A) results in the region I described in the diagram of Figure 2, with material deposited in the form of small islands on the substrate surface.During deposition, more prominent (higher) particle structures on the substrate surface grow separated, leaving to a larger surface area, which facilitates solvent evaporation from that region.Thus, the available solvent around this region is drained in the direction of these islands by capillary forces, carrying material for the formation and further growth of such structure 30 .It results in the sparse points observable in Figures 3a to 3d for samples 1l_0.3,2l_03, 1l_0.6 and 2l_0.6 respectively, being more branched in Figures 3b and 3d, as expected with subsequent deposited layers. Figure 4 shows SEM images of region II of samples deposited with two layers and with varying heights for the second phase.Images show that with increasing height of the top phase (A), larger and recognizable structures become sparse as both the density and the thickness are reduced.This goes in accordance with the idea that the top phase helps "wash away" most of the adhered precursor solution back into the beaker, leading to the homogenization of some parts of the surface and growing others. 
As the volume of solvent adhering to the substrate during deposition is small, due to the low viscosity and high surface tension, the drying and draining mechanisms follow a "capillarity regime" 40 , which usually occurs for slower depositions.In this regime, the evaporation of solvent alters the position of the equilibrium point between the solution flux to the substrate and back to the solution.This is explained in accordance with Figure 5, where a flux diagram on the surface is drawn.As the substrate crosses the top phase, in the pulling step, much of the solution is removed from the substrate, and it is drained back into the beaker, due to the increased surface tension.Thus, the thickness of the deposited film can be reduced and the volume of solvent adhering to the film and the substrate maintains the regime responsible for the growth of the film structures as the capillary regime instead of the draining regime.The lower volume of adhered solvent when exposed to the atmosphere starts to evaporate easily, when compared to the higher adhered volume of solution in the conventional dip-coating method, where the time is increased to allow evaporation of the solvent.This also indicates that a lower deposition rate is not the only way to change the regime, and consequently to change the morphology and structure of films 38 . With the deposition of more layers, other interactions may take place on the film´s surface, leading to the homogenization of the regions that present varying depth, caused by defects such as fractures and cracks, which may avoid the increase in film thickness with the subsequent deposited layers.Thus, regions that carry more material during substrate removal are those with higher capillary interaction, such as cracks, wrinkles and fractures, formed by drying of previous layers, or simply rougher regions of the substrate 41 . Transmittance spectra acquired from region II of the samples are shown in Figure 6, for different samples.The effect of homogenization process may relate a lower transmittance to a higher number of layers, with films of only one layer showing less transmittance due to the growth from the capillarity regime and the diffusion of beams caused by the larger surface area.It is also in good agreement with the higher efficiency observed during photocatalysis process.From the transmittance results shown in Figures 6b and 6c it is possible to conclude that samples with a higher number of layers do not necessarily have a lower transmittance, which would be expected for samples with similar properties, deposited by the conventional dip-coating process.Images of the samples are shown in the supporting information file (Figure S1), as well as more data on transmittance and reflectance (Figure S2). Concerning Figure 6a it is also possible to conclude that subsequent depositions did not adhere efficiently to layers already deposited, resulting in similar transmittances for samples with 1, 2 and 4 layers.The smaller amount of deposited material and the deposition of material being relegated to regions of higher capillary pressure, also prevents the deposition of subsequent layers from growing regions that have already been detached from the film 30,42 .Samples with a higher number of layers showed a superior degree of homogeneity when compared to samples deposited with 1 and 2 layers with different heights of the top phase. 
X-ray diffraction measurements were performed on the surface of region II of the samples and are shown in Figure 7, regarding the predominant structures in the samples. In general, the diffraction profiles showed low crystallinity and the diffuse shape characteristic of nanoscopic crystallite domains. However, it is still possible to identify the main anatase reflections. The higher bandgap values obtained from the optical measurements also point to anatase rather than rutile 15, leading us to confirm that the predominant phase in the films is anatase; this is in good agreement with the X-ray data, in spite of the existence of only a few peaks in the diffractograms. It must be recalled that anatase is a better photocatalyst than rutile 43-46.

Table 2 shows the crystallite size evaluated by the Scherrer equation 47 and the lattice distortion, calculated through Equations 2 and 3, respectively,

S = K λ / (β cos θ),   (2)

where S is the crystallite size, K is a shape constant, taken as 0.9, λ the equipment's X-ray wavelength (Cu Kα = 0.15405 nm), β the width at half maximum of a given peak and θ its diffraction angle, and

Δd/d = β / (4 tan θ),   (3)

where the distortion Δd/d is obtained from the width at half maximum of the most intense peak over the tangent of its diffraction angle 48.

Photocatalysis measurements were performed on film samples from region II, which were cut from the whole sample. Figures 8a, 8c and 8e show the absorbance spectra of the aqueous solution interacting with one-layer film samples, and Figures 8b, 8d and 8f those with two-layer films, in the three steps of degradation of methylene blue: before (0 min), during (90 min) and after (180 min) the photocatalysis process, with values normalized to the original aliquot spectrum. A figure showing the absorption curves of methylene blue obtained after 180 min of the photocatalytic process for all the samples is given in the supporting information file (Figure S3). As expected, the shape follows the band absorbance of the MB spectra. In Figure 9a the percentage of degraded methylene blue is plotted along the time axis, which allows evaluating and comparing the efficiency of the different samples. To help the comparison between samples, Figure 9b shows a column chart of the degradation efficiency after 180 min for the different films. It can be observed that film samples deposited via the two-phase system show a higher efficiency in breaking down the methylene blue molecule. According to Figure 9, samples deposited by the method described here were more efficient in general, only slightly dependent on the height of the second phase, and samples with two layers were found to be more efficient than samples with only one layer. This higher efficiency is interpreted as a consequence of the large surface area of the films achieved by the method, related to the morphology resulting from deposition (Figure 4) and to changes in the surface availability and energy of the TiO2 film 1,45,46,49. Considering the smaller volume of material used for deposition in the proposed method, and the more transparent films inferred from the transmittance data (Figure 6c), the absorption of photons and the creation of electron-hole pairs should occur at a lower rate, as fewer photons are absorbed by the samples. The same may happen with the spectra of samples deposited with a top phase of 0.3 cm (Figure 6b), if a large part of the light missing from the transmittance spectrum is considered to have been scattered or reflected rather than absorbed.
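The evaluation behind Equations 2 and 3 can be reproduced with a few lines of code. In the sketch below, the peak position and width are hypothetical values for an anatase-like reflection, not the fitted parameters underlying Table 2.

```python
# Crystallite size (Scherrer, Eq. 2) and lattice distortion (Eq. 3) from one XRD peak.
# The peak position and FWHM below are hypothetical, not the fitted values of Table 2.
import numpy as np

K   = 0.9        # shape constant
lam = 0.15405    # nm, Cu K-alpha wavelength

def scherrer_size(two_theta_deg, fwhm_deg):
    """Crystallite size S (nm) from the peak position 2-theta and FWHM, both in degrees."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)      # FWHM converted to radians
    return K * lam / (beta * np.cos(theta))

def lattice_distortion(two_theta_deg, fwhm_deg):
    """Lattice distortion (dimensionless), beta / (4 tan(theta))."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return beta / (4.0 * np.tan(theta))

two_theta, fwhm = 25.3, 0.9          # hypothetical anatase-like (101) peak
print(f"S ~ {scherrer_size(two_theta, fwhm):.1f} nm, "
      f"distortion ~ {lattice_distortion(two_theta, fwhm):.4f}")
```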
The light scattering coincides with the structures that can be observed on the SEM images of sample surface in Figure 3.These structures, through the 'capillary regime' favor growing up the substrate, creating vertical configurations.The higher photocatalytic efficiency can be attributed, also, to the growth of anatase, besides the low crystallinity 50 , since a higher concentration of grain boundaries are related to higher surface area, and then, higher rates of oxygen, water and OH adsorption are expected 45 . The larger area created in a deposited film with lesser amount of material, is due to the concentration of deposited material in relation to the solvent.As the latter is mostly drained in the process by the pressure of the top phase, the remaining adherent to the substrate inhibits the growth of crystallites by evaporating quickly and not allowing the ramification of structures to larger areas.Following this change in morphology with increase in surface area, another possible consequence of the method that can lead to an improvement in photocatalysis efficiency, is the reorientation and exposure of different species and faces to the medium in which the film is immersed 51 .Low adsorption of methylene blue molecules was observed on the surface of the samples in general, prior to photocatalysis, since the difference in concentration of methylene blue between a control solution and solutions exposed to films kept in darkness for 2 h being around only 2% of the initially prepared solution concentration. Another feature of the proposed method was briefly analyzed, and concerns the function of the top phase as a "cover" for the first.Assuming that the second phase is composed of a denser and more viscous phase, evaporation and solvent loss would result in the formation of a gel, difficult to deposit by the dip-coating process.Three samples of the solutions were separated, one composed only of the first phase, one composed of both, mixed, and one composed of both already separated.It was observed, during storage, that the phase separation of the second sample occurs, and as time pass, the "capped" solution by the top phase took longer to dry and turn into a gel.This is attributed to the faster loss of volume of more volatile components, such as water, alcohol and possible organic components (remnants of TTIP reactions) influenced by the atmospheric oxygen, when the sample is not capped. Conclusions The system proposed here may lead to improvements on the conventional dip-coating process concerning applications of TiO 2 thin films.The anatase main crystalline structure of the analyzed samples leads to distinct morphologic structure and optical transmittance spectra depending on the height of the top liquid solution layer.It is clear that it influences the efficiency on the degradation of methylene blue upon photocatalysis analysis. Although the interpretation of results may be rather complex, it is possible to conclude in this work that new features on the deposition process are worth investigation, in order to adapt the method for new and for known applications, increasing efficiency.The interface created by the precursor solution in the two phase procedure deserves attention as it may lead to further developments in doping, thickness control, surface morphology and possibly more ways to further engineer thin-film properties to suit a given application. 
Moreover, applications of this method can still be studied, considering that it is the first time a deposition system like this is proposed. Simultaneous deposition of multiple layers, control of pH during gel formation, separation between the precursor solution and air to avoid oxidation, and finer control of thickness and of the draining regime are some aspects of the proposed method that could be investigated in depth and applied. In this paper, the focus has been on changes of basic characteristics and properties.

Figure 1. Proposed method (top) and conventional dip-coating process (bottom). The label "A" refers to the floating (top) phase, whereas "B" refers to the denser (bottom) phase. The proposed method creates different regions in the sol-gel solution and in the deposited film.
Figure 2. Diagram showing the thin-film samples, and surface SEM of the two distinct regions (images of sample 2l_0.3). Image magnification: 4000×.
Figure 3. SEM images of the film surface in regions exposed only to the top phase. Magnification: 4000×.
Figure 4. SEM images of region II of samples deposited with two layers and varying heights of the second phase.
Figure 5. Flux diagram on the surface of the depositing thin film in the dip-coating process, adapted from the work of Brinker and Hurd 29. Left: conventional dip-coating method; right: two-phase method.
Figure 6. Transmittance spectra through region II of the deposited films: (a) with the second phase removed; (b) with a second phase of 0.3 cm and (c) with a second phase of 0.6 cm.
Figure 9. (a) Values of peak absorbance of methylene blue at distinct time steps; the lines are just guides to the eye. (b) Column chart comparing the degradation efficiency of the photocatalysis after 180 min for the different films.
Table 1. Thin films prepared with distinct numbers of layers and heights of the top phase.
Table 2. XRD analysis of films prepared with 4 layers of TiO2 with varying height of the second phase.
Absence of Non-Trivial Supersymmetries and Grassmann Numbers in Physical State Spaces This paper reviews the well-known fact that nilpotent Hermitian operators on physical state spaces are zero, thereby indicating that the supersymmetries and"Grassmann numbers"are also zero on these spaces. Next, a positive definite inner product of a Grassmann algebra is demonstrated, constructed using a Hodge dual operator which is similar to that of differential forms. From this example, it is shown that the Hermitian conjugates of the basis do not anticommute with the basis and, therefore, the property that"Grassmann numbers"commute with"bosonic quantities"and anticommute with"fermionic quantities", must be revised. Hence, the fundamental principles of supersymmetry must be called into question. Introduction In physics, materials in the universe are classified into bosons and fermions, which are described by quantum fields on the space-time. In order to explain fermions with classical theories, physicists use the ideal quantities "Grassmann numbers". However, few textbooks explain the domain of "Grassmann numbers". Nevertheless, these "Grassmann numbers" are treated as well-established mathematical objects in modern theoretical physics. It is difficult to find a discussion on the domain of the "Grassmann numbers". The book "Supermanifolds" by de Witt [1] begins with a description of (infinitedimensional) Grassmann algebras. According to his definition, a scalar field in a supersymmetric theory consists of infinitely many real scalar fields in the same sense as ordinary field theories, which sounds somewhat strange. As regards Lie algebra, commutation relations between bases, L, include all the information on the Lie algebra. Hence, physicists believe that the anti-commutation relations between θ i define the algebra. However, in the case of "Grassmann numbers", θ i are not described as forming a basis but are instead referred to as "parameters" or "variables". In contrast, for a Lie algebra, su(2), typically σ 1 , σ 2 , σ 3 are not referred to as "Lie parameters". They are not parameters but invariable bases of the algebra. Thus, components and bases are confused here. The commutation relations between the generators of a Grassmann algebra and their Hermitian conjugates show that the appearance of Clifford algebras is natural in the unitary extension of Grassmann algebras. In addition, supersymmetries are imagined as symmetries whose "parameters" are "Grassmann numbers". The Grassmann algebra generated by an n-dimensional complex vector space has n k=0 n C k = 2 n dimensions. Therefore, elements of the Grassmann algebra can be explained using these 2 n components. In addition, any vector spaces over a complex number field, C, are vector spaces over a real number field, R. For instance, C n ≃ R 2n are vector spaces over R. Clearly, this statement is trivial, because R is a subfield of C. Hence, elements of a Grassmann algebra generated by an ndimensional complex vector space can be explained by 2 n+1 real components. Such a realization tells us that if "Grassmann numbers" are elements of a Grassmann algebra, the combination, ǫQ, of the "Grassmann numbers", ǫ, and "infinitesimal generators of supersymmetries", Q, satisfy ordinary commutation relations without anti-commutation relations, and can be explained as a linear combination with real coefficients. 
In addition, many physicists hesitate to treat the "Grassmann numbers" in terms of Grassmann algebra directly, which may be because of the confusion of components with bases. The author believes that the solid construction of theories is the most important aspect of theoretical physics, and therefore a comprehensive understanding of the "Grassmann numbers" is required in order to make accurate use of this theory. As will be described below, through examination of the properties of Grassmann algebras and "Grassmann numbers", the author suggests that the undefined tool "Grassmann numbers" should be reconsidered. In the next section, it is proven that supersymmetries and "Grassmann numbers" vanish on physical state spaces. Then, in the third section, an example of commutation relations between the generators of a Grassmann algebra and their Hermitian conjugates is shown, and a reconsideration of the anti-commutation property of "Grassmann numbers" is proposed. In addition, doubt is cast on the basic construction of supersymmetry.

Nilpotent Hermitian Operators on Physical State Spaces Hilbert spaces are complete vector spaces over C endowed with a Hermitian inner product. 'Completeness' refers to the convergence property of Cauchy sequences; however, this is not a central topic in this paper. A positive definite Hermitian inner product, represented by ⟨·|·⟩, is a Hermitian inner product which satisfies the following conditions: (i) ⟨ψ|ψ⟩ ≥ 0 for all ψ ∈ H, and (ii) ⟨ψ|ψ⟩ = 0 ⇒ ψ = 0. Hilbert spaces with positive definite inner products are called physical state spaces. Let us prove the nonexistence of nontrivial nilpotent Hermitian operators on physical state spaces.

Theorem 1 Let H be a vector space over C endowed with a positive definite inner product, ⟨·|·⟩. Suppose that Y is a nilpotent Hermitian operator acting on H, i.e. there is a natural number n such that Y^(n−1) ≠ 0 and Y^n = 0, and Y† = Y. Then, Y = 0.

∵ Suppose that n ≥ 2. Y† = Y and Y^n = 0 imply that, for every state ψ,

⟨Y^(n−1)ψ | Y^(n−1)ψ⟩ = ⟨ψ | (Y†)^(n−1) Y^(n−1) ψ⟩ = ⟨ψ | Y^(2n−2) ψ⟩ = 0, hence Y^(n−1)ψ = 0.

Here, the first equality is obtained by taking the Hermitian conjugate, the second equality is supported by the Hermitian property, Y† = Y, the third equality is obtained from the expression Y^n = 0 (since 2n − 2 ≥ n for n ≥ 2), and the final statement, Y^(n−1)ψ = 0, is derived from the positive definiteness of the inner product. Because Y^(n−1)ψ = 0 for every state ψ, Y^(n−1) must be zero. This contradicts the hypothesis and, therefore, we obtain n = 1. Equivalently, Y = 0.

Next, let us consider the operator X given by

X = ǫ^α Q_α + (ǫ^α Q_α)†, with summation over α = 1, 2.

This kind of operator appears in the exponents of supersymmetries [2]. Suppose that ǫ_1 and ǫ_2 anticommute with each other, i.e., ǫ_1 ǫ_2 = −ǫ_2 ǫ_1, and ǫ_1 ǫ_1 = ǫ_2 ǫ_2 = 0. Usually, so-called "Grassmann numbers" have the property that ǫ_α commutes with "bosonic quantities" and anticommutes with "fermionic quantities". If we assume that Hermitian conjugation transforms "bosonic quantities" to "bosonic quantities" and "fermionic quantities" to "fermionic quantities" (otherwise Hermitian conjugation is nothing but a supersymmetry), then ǫ_α should commute with (ǫ^α Q_α)†. From the ansatz of anti-commutation, we can derive ǫ_α ǫ_β ǫ_γ = 0, which indicates that X is nilpotent. Theorem 2, which states that ǫ^α Q_α = 0 on a physical state space, is the direct conclusion of the property 'ǫ_α commutes with "bosonic quantities" and anticommutes with "fermionic quantities"'. ∵ (i) It is obvious from the definition that X = X†. Therefore, X is a nilpotent Hermitian operator and, from Theorem 1, we conclude that X = 0. (ii) X = 0 is equivalent to ǫQ = −(ǫQ)†.
Again, iǫQ is a nilpotent Hermitian operator and ǫQ = 0. Corollary 1 raises a question not only about supersymmetries but also concerning "Grassmann numbers". ∵ Assume that ǫ satisfies the condition; 'ǫ commutes with "bosonic quantities" and anticommutes with "fermionic quantities"'. Z = ǫ + ǫ † is a nilpotent Hermitian operator and we therefore obtain Z = 0, while ǫ † + ǫ = 0 shows that iǫ is nilpotent and Hermitian. Hence, ǫ = 0 is required. Therefore, in order to treat "non-trivial Grassmann numbers", ghost states, which are defined as states whose existence probabilities are negative, are required. As it is apparent that "Grassmann numbers" strongly depend on the representation H, the fundamental principles of supersymmetry are called into question. In order to examine the difference between "Grassmann numbers" and elements in the unitary extension of Grassmann algebras, a basic example of a Grassmann algebra generated by a two-dimensional vector space over C is given in the next section. Hermitian conjugation of a Grassmann algebra Let V be a two-dimensional vector space over C, V ≃ C 2 , and suppose that V is endowed with a positive definite Hermitian inner product, ·|· . An orthonormal basis, {e 1 , e 2 }, is fixed in this section, such that e i |e j = δ ij . Now, the Hermitian inner product can be expressed using the expansion coefficients with respect to the basis. Suppose that two vectors, v, w ∈ V , are expanded with respect to {e 1 , e 2 }, Then, the inner product of v and w can be written as Next, we wish to examine the Grassmann algebra, A, generated by V . The algebra A is a vector space over C. The product in A is denoted by the wedge product, ∧, and, thus, the product of v, w ∈ A is represented by v ∧ w. The multiplication is bilinear and satisfies the associative and distributive laws. In addition, the basis of A can be constructed from the basis of V , so that {1, e 1 , e 2 , e 1 ∧ e 2 } form a basis of A. Note that ω = e 1 ∧ e 2 is called the volume form. If X, Y ∈ A are expanded with respect to the previous basis, we have It is apparent that the term proportional to the volume form in C(X) ∧ * Y gives a Hermitian form on A, and hence As Then, the relationships between the coefficients can be expressed as Here, Eq. 22 implies that the previous Hermitian inner product is preserved under the transformation by U, and therefore Next, let us consider the representation of A on itself. Assume that the repre- Here, two matrices are the representation matrices of the basis e 1 and e 2 . The multiplication relations of F 1 and F 2 are These indicate that F 1 and F 2 anticommute with each other, i.e., F 1 F 2 = −F 2 F 1 , and it can be easily confirmed that F 2 1 = F 2 2 = 0. Moving on to the Hermitian conjugates of F 1 and F 2 , it is apparent that the inner product on the algebra A is equivalent to the standard Hermitian inner product on C 4 . Therefore, the Hermitian conjugation of the linear transformation on A can be obtained by taking the complex conjugates and transpositions of F 1 and F 2 . We obtain Analysis of the multiplicative relation between F 1 , F 2 , F † 1 , and F † 2 reveals that the algebra is not a Grassmann algebra but is actually a Clifford algebra, with , and F † 2 should be treated as "Grassmann odd quantities", but they do not simply anticommute with each other. The remaining multiplication relations are F † 1 F 2 + F 2 F † 1 = 0 and the Hermitian conjugation. 
We obtain By taking linear combinations of F s, the generators γ † a = γ a , (a = 1, 2, 3, 4) of the Clifford algebra are obtained, with and the multiplication relations of the γ's and their Hermitian conjugates are It has therefore been shown that the extension of a Grassmann algebra with its Hermitian conjugates results in a Clifford algebra. Clearly, this result strongly suggests that we reconsider the anti-commutation relations between "Grassmann numbers" and their Hermitian conjugates. Discussion The construction of the inner product in the previous section is obtained using the Hodge dual operator. As shown, the algebra closed under the Hermitian conjugation is a Clifford algebra rather than a Grassmann algebra. As Clifford algebras are closely related to rotations, fermions may be related to rotations of certain infinite dimensional spaces, such as state spaces. The Grassmann algebra, A = C ⊕ V ⊕ (V ∧ V ), constructed from a twodimensional vector space, V , over the complex number field, C, is split into two parts: "the bosonic part", A 0 = C ⊕(V ∧V ), and "the fermionic part", A 1 = V . For v, w ∈ A 1 , v ∧w = −w ∧v. Let us consider n "Grassmann numbers", (θ 1 , θ 2 , · · · , θ n ). If all θ i are elements of A 1 and the multiplication of θ i is identified with the wedge product of the Grassmann algebra, all θ i satisfy the condition that θ i θ j + θ j θ i = 0. Of course, one can consider various Grassmann algebras constructed from several vector spaces, and certain properties of θ i depend on the dimensions of the generating vector spaces. For example, in the case of dim C V = 2, the condition, θ i θ j θ k = 0, is satisfied for every "configuration (θ 1 , θ 2 , · · · , θ n )", whereas for dim C V = 3, configurations with θ i θ j θ k = 0 (n > 2) exist. An essential question is posed as to whether "Grassmann numbers" are elements of any of these possible Grassmann algebras. To answer this, we begin with the definition of a Grassmann algebra, as we know that the dimension of the generating vector spaces is required. It is an assumption that the algebra, A, acts on the Hilbert space H as, without this condition, the multiplication of "Grassmann numbers" θ 1 , · · · , θ n with any states cannot be considered from the outset. In other words, it is conjectured that a representation R : A → gl(H) is given. The conjecture is appropriate. This conjecture is equivarent to the assumption where supersymmetry is considered to be a symmetry of the Hilbert space. Let X = ǫQ+h.c. and U = exp(X) = 1+X +X 2 /2+· · · . The action of U, X and Q α on any states |ψ should be considered under the assumption; U|ψ , X|ψ , Q α |ψ . X|ψ = ǫ α Q α |ψ implies that the multiplication of "Grassmann numbers" ǫ α on the state is performed in this expression. Otherwise nobody can consider the transformation of states by those supersymmetries. Hence people who claim that the multiplication of "Grassmann numbers" θ 1 , · · · , θ n with any states cannot be considered from the outset are also claiming that supersymmetries are not symmetries of the Hilbert space. Constructing spaces which have the action of a Grassmann algebra is easy. Let us show that below. For any vector space H over C, by considering the extension of the coefficients we obtain a vector space H θ , which has the action of A. The set, H θ , is actually a vector space over C, and it is easily shown that every finite-dimensional vector space over C has a positive definite Hermitian inner product. 
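The algebraic statements of this section can be checked numerically. The sketch below represents e1 and e2 by left wedge multiplication on the ordered basis (1, e1, e2, e1∧e2) and takes adjoints as conjugate transposes with respect to that basis; this particular matrix convention (basis ordering and signs) is an assumption of the illustration, since the explicit matrices are not reproduced here, but the relations it verifies are exactly those stated above: F1 and F2 are nilpotent and anticommute, the anticommutators with the adjoints close into a Clifford-type algebra, and the Hermitian combinations γ_a satisfy {γ_a, γ_b} = 2δ_ab.

```python
# Numerical check of the claims above: the wedge-multiplication operators F1, F2 are
# nilpotent and anticommute, while the algebra generated together with their adjoints
# closes into a Clifford algebra. Basis ordering (1, e1, e2, e1^e2) and the resulting
# signs are an assumed convention of this sketch, not taken from the paper.
import numpy as np

F1 = np.zeros((4, 4)); F2 = np.zeros((4, 4))
F1[1, 0] = 1.0   # e1 ^ 1  =  e1
F1[3, 2] = 1.0   # e1 ^ e2 =  e1^e2
F2[2, 0] = 1.0   # e2 ^ 1  =  e2
F2[3, 1] = -1.0  # e2 ^ e1 = -e1^e2

def anti(A, B):
    """Anticommutator AB + BA."""
    return A @ B + B @ A

assert np.allclose(F1 @ F1, 0) and np.allclose(F2 @ F2, 0)   # F1^2 = F2^2 = 0
assert np.allclose(anti(F1, F2), 0)                          # F1 F2 + F2 F1 = 0
assert np.allclose(anti(F1, F2.conj().T), 0)                 # F1 F2+ + F2+ F1 = 0
assert np.allclose(anti(F1, F1.conj().T), np.eye(4))         # {F1, F1+} = 1

# Hermitian gamma operators built from the F's and their adjoints
gammas = [F1 + F1.conj().T, 1j * (F1 - F1.conj().T),
          F2 + F2.conj().T, 1j * (F2 - F2.conj().T)]
for a, ga in enumerate(gammas):
    for b, gb in enumerate(gammas):
        assert np.allclose(anti(ga, gb), 2.0 * (a == b) * np.eye(4))
print("Clifford relations {gamma_a, gamma_b} = 2 delta_ab verified.")
```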
If there is a positive definite inner product on V which generates A, a positive definite inner product on A can be defined using the Hodge dual operator. Thus, H θ becomes a vector space with a positive definite inner product. Conclusion This paper concludes that the property stating that a "Grassmann number", ǫ, commutes with "bosonic quantities" and anticommutes with "fermionic quantities" is not appropriate to define the idea of "Grassmann numbers" and, as a result, doubt is cast on all calculations involving Grassmann numbers. The property is different from that of Grassmann algebra and, in particular, the foundation of supersymmetry must be reconsidered.
Ligand‐Controlled Palladium‐Catalyzed Carbonylation of Alkynols: Highly Selective Synthesis of α‐Methylene‐β‐Lactones

Abstract The first general and regioselective Pd‐catalyzed cyclocarbonylation to give α‐methylene‐β‐lactones is reported. Key to the success of this process is the use of a specific sterically demanding phosphine ligand based on N‐arylated imidazole (L11) in the presence of Pd(MeCN)2Cl2 as pre‐catalyst. A variety of easily available alkynols provide, under additive‐free conditions, the corresponding α‐methylene‐β‐lactones in moderate to good yields with excellent regio‐ and diastereoselectivity. The applicability of this novel methodology is showcased by the direct carbonylation of biologically active molecules including natural products.

Introduction α-Alkylidene-β-lactones have emerged as important synthetic targets due to their occurrence in a variety of natural compounds and biologically active molecules (Scheme 1). [1] For example, lactones A [1a-c] and B [1d] were isolated from Grazielia species and Disynaphia multicrenulata. Moreover, C exhibits promising inhibitory activities against certain fungal pathogens. [1e] In addition, α-alkylidene-β-lactones D and E were studied as potent and selective inhibitors of the serine hydrolase ABHD16A. [1f-g] Following this work, F also served as a sensitive probe for detecting ABHD16A activity in mouse brain membrane lysates. [1g] Apart from these medicinal applications, α-methylene-β-lactones in particular have attracted interest in materials science; for example, well-defined copolymers with controllable molecular weight and narrow polydispersity were prepared by ring-opening polymerization. Here, the vinylidene groups of the lactones could be further functionalized, producing well-defined blocks with designable segments. [2] In organic chemistry, the high density of functional groups arranged in a compact manner allows diverse synthetic utilization and makes this class of compounds interesting building blocks. [3] More specifically, owing to the inherent strain of the four-membered ring, they readily undergo ring-opening reactions with a wide range of nucleophiles by either acyl C–O or alkyl C–O bond cleavage. [3f,g] Besides the usual electrophilic sites at the carbonyl and oxetane carbon atoms of β-lactones, Michael-type additions of nucleophiles and radicals at the methylene carbon atom are feasible [3h] and offer attractive possibilities for preparative purposes. Notably, the carboxy-activated exo-methylene group serves as a reactive dienophile, while the α,β-unsaturated carbonyl moiety holds potential as a heterodiene for [4+2]-cycloadditions. [3e] Moreover, α-methylene-β-lactones constitute convenient allene equivalents, as demonstrated by their decarboxylation to allenes on thermolysis. [3b,e] Considering the value of α-alkylidene-β-lactones in organic synthesis, significant interest in their preparation exists. [4] Because of the dense functionalization, mainly special synthetic methods have been developed for this class of compounds, including [2+2]-cycloaddition of ketenes, [3n, 4a-c] lactonization of β-hydroxycarboxylic acids or derivatives, [3d, 4d-g] elimination of selenoxide from α-methyl-substituted lactones, [3b] and deoxygenation of β-peroxylactones. [3f, 4g-h] More recently, rhodium- or palladium-catalyzed carbonylations of alkynols have also been disclosed.
[5] Thus, a-(triorganosilyl)-methylene-b-lactones (Scheme 2a), [5a,b] a-(alkoxycarbonyl)-methylene-b-lactones (Scheme 2b), [5c-d] and (Z)-a-chloro/bromo-alkylidene-b-lactones (Scheme 2 b) [5e-f] were obtained. Notably,t he synthesis of parent amethylene-b-lactones gained less success due to the highly reactive exo-methylene double bond. In fact, to the best of our knowledge,t here is only one reported example for the carbonylation of 1-methyl-2-butyn-1-ol described, leading to 4,4-dimethyl-3-methyleneoxetan-2-one in 5% yield with probably polymeric esters as other products through ap alladium-catalyzed process (Scheme 2c). [5g] Despite these problems,w et hought the cyclocarbonylation of propargylic alcohols in the presence of an improved catalyst would offer amost straightforward and atom-efficient access to these products.T hus,w eb ecame attracted by this challenge.B ased on our interest in the development of carbonylation reactions, [6] herein, we report the first general and highly selective Pd-catalyzed carbonylation of propargylic alcohols to provide af amily of new a-methylene-blactones (Scheme 2d). Results and Discussion At the beginning of our studies,1-ethynyl-1-cyclohexanol 1a was chosen as model substrate.T oi dentify as uitable catalyst system, av ariety of ligands (in the case of diphosphine ligands 2mol %, in the case of monophosphine ligands 4mol %) were tested in the presence of [Pd(p-cinnamyl)Cl] 2 ( Figure 1). Initially,t he reactivity of bidentate phosphines L1-L5 with different backbones and chelating units was evaluated. When L1 (Xantphos), L2 (BINAP), L4 (d t bpx) and L5 were applied, almost equal amounts of the desired blactone 2a and butenolide 3a were obtained. L3 (DPPF) proved to be not suitable at all, leading to 3ain 55 %with 3% yield for 2a.N op rogress was achieved when monodentate ligand L6 (BuPAd 2 )was used in this reaction. However, in the presence of L7 (P t Bu 3 )aslightly improved regioselectivity (71/29) was obtained. Based on this result, we assumed that tert-butyl groups may have apositive influence on the desired branched selectivity.T hus,o ther monodentate ligands L8-L11 were tried with different backbones bearing tert-butyl substituents on the phosphorus atom. Indeed, more-sterically hindered L8 (JohnPhos) gave 80/20 selectivity and 57 %yield of 2a.W hent he N-phenyl-pyrrole-based ligand L9 was applied, as imilar selectivity (77/23) and ah igher yield (67 %) were observed. To further increase the steric bulk of the ligand, we introduced substituents on the ortho position of the phenyl group resulting in the new ligand L10,w hich was prepared in good yield in two reaction steps (see Supporting Information Scheme S1 for detail). With this ligand in hand, the regioselectivity could be improved to 97/3, albeit the reactivity was affected negatively (30 %y ield of 2a). Finally, to our delight, the 1-(2,6-diisopropylphenyl)-1H-imidazole- With optimized reaction conditions established, arange of easily available and structurally diverse propargylic alcohols were examined (Table 1). It is worthy to note that all of the desired a-methylene-b-lactones were obtained in isolated yield with excellent regioselectivity and diastereoselectivity. Notably,the latter is likely to be controlled by the substrates. 
Thea lkynols with different substituents (dimethyl, phenyl, ketal) on the 3-or 4-position of cyclohexyl group were transformed into the corresponding products 2a-2e in yields of 60-97 %a nd excellent selectivity.T his protocol can be readily scaled-up to carbonylation of 1.0 gram of 1a.T his reaction proceeded smoothly,p roviding 2a in 92 %y ield. Substrates 1f-1i containing heteroatoms (oxygen, sulfur, nitrogen) proved to be viable too and gave the corresponding b-lactones 2f-2i in 81-98 %y ields with > 20/1 selectivity. Five-membered ring substrates such as 1j can be also applied successfully in this carbonylation reaction (83 %yield of 2j). In case of the carbonylation of the 1-ethynylcyclododecan-1ol 1k,the use of L10 instead of L11 led to ahigher yield of 2k (88 %). Noncyclica lkynols 1l-1r bearing different alkyl and benzyl groups underwent lactonization smoothly and gave the desired products 2l-2r in 38-90 %y ields.B yi ncreasing the catalyst loading (5 mol %Pd(MeCN) 2 Cl 2 ,30mol % L11), the corresponding products 2m and 2q were isolated in 90 %and 49 %yield, respectively,with > 20/1 regioselectivity.When amonoalkyl-substituted propynyl alcohol 1s was subjected to the optimized conditions, 2s was obtained, albeit in al ower isolated yield. On the other hand, starting from a-monoarylsubstituted alkynol 1t,c arbonylation proceeded at increased catalyst loading to give 2t in 58 %y ield. Interestingly, dicarbonylated product 2u was obtained directly in 56 % isolated yield by carbonylation of dialkynol 1u.I ts hould be noted that the synthesis of such multiply b-lactone is not an easy task. In fact, to our knowledge no such transformation has been described yet. Thei mportance of this novel methodology is showcased by the late-stage modification of biologically active and natural products, [8] which provides easy access to diverse amethylene-b-lactones,highlighting the substrate scope of this protocol and its potential utility in organic synthesis ( Table 2). Due to the poor solubility of some of the complex substrates, typically 5mol %o fp alladium catalyst was applied. Under otherwise similar conditions,i na ll cases the reactions proceeded well with excellent regio-and diastereoselectivities.More specifically,tropinone-derived propargylic alcohol 1v delivered the desired product 2v,w ith good efficiency (81 %y ield). Pentoxifyllin, ad rug with anti-inflammatory properties,was transformed to the corresponding product 2w smoothly (85 %y ield). Recently,m uch attention has been paid to steroidal containing spiro-heterocycles for their characteristic physiological activities. [9] Thus,weinvestigated reactions of pharmaceutically relevant steroidal alkynols: ethynyl estradiol, ethisterone,levonorgestrel, and lynestrenol, which are used for contraception and gynecological disorders. All these compounds participated efficiently in this transformation to provide the carbonylative products 2x-2z, 2aa in high yields (85-93 %). Notably,t he molecular structure of ethynyl estradiol derivative 2x was unambiguously confirmed by X-ray structure analysis. [10] Similarly, a-methylene-blactones 2ab-2af derived from other steroid hormones such as dihydrocholesterol, stanolone and epiandrosterone,w ere obtained in 41-87 %y ields with excellent selectivity.M oreover, homopropargylic alcohols also proved to be suitable substrates and afforded the corresponding 5-membered products with excellent regioselectivity (see Supporting Information, Scheme S2). 
[a] Unless otherwise noted, all reactions were performed in MTBE (2.0 mL) at 100 8 8Cfor 20 hinthe presence of 1 (0.5 mmol), Pd-(MeCN) 2 Cl 2 (1.3 mg, 0.005 mmol), L11 (11.2 mg, 0.03 mmol), and CO (40 bar). Isolated yields were given before the parentheses. The NMR yields (values within the parentheses), regioselectivity of 2/3 and diastereoselectivity of 2 were determined by crude 1 HNMR analyses using dibromomethane as the internals tandard. It should be noted that more than 80 %o ft he here described a-methylene-b-lactones are prepared for the first time.T his clearly demonstrates the synthetic value of this novel methodology.W ea ssumed that our new products can be conveniently used as interesting basic building blocks. [3] Thus,t os howcase their utility,s elected follow-up transformations were conducted by using 2b as the starting material (Scheme 3). To illustrate the possibility to prepare functionalized acrylic acid derivatives, a-methylene-b-lactone 2b readily underwent ring opening with benzylamine in the presence of Pd(OAc) 2 ,a ffording b-hydroxy amide 4 in 61 % yield. Furthermore, a-methylene-b-lactones provide an easy and efficient entry into a-alkylidene-b-lactones applying cross metathesis in the presence of Grubbs II catalyst. Indeed, ag ood yield of 5 was obtained with high Z-selectivity (stereochemistry determined by NOESY study,s ee Supporting Information for detail). Particularly,t his route allows for the efficient preparation of focused libraries of b-lactones, which have found use as biological research probes and therapeutic agents. [11] Addition of carbon-or hetero-nucleophiles gives access to a-alkylated b-lactones.Exemplarily,the Rh-catalyzed conjugate addition of phenylboronic acid to 2b provided 6 in 40 %y ield and the treatment of 2b with thiophenol and triethylamine provided a-(thiomethyl)-blactone 7 via an ucleophilic conjugate addition. Finally,f ourmembered thiolactones can be made in af acile manner by employing Lawessonsr eagent. Thes ynthesis of a-methylene-b-S-thiolactone 8 illustrated the diverse possibilities for the construction of novel sulfur heterocycles. Regarding the mechanism of this novel carbonylation reaction, in principle two main pathways are possible (Scheme 4a): 1) Initially,the active palladium hydride species I could be generated in situ by the combination of palladium precursor with phosphine ligands, [12] in which an excess of phosphine ligand (L11)i sn eeded to reduce the initial Pd II Table 2: Pd-catalyzed cyclocarbonylation of alkynols derived from biologically active and natural products. [a] [a] Unless otherwise noted, all reactions were performed in MTBE (2.0 mL) at 100 8 8Cfor 20 hinthe presence of 1 (0.1 mmol), Pd-(MeCN) 2 Cl 2 (1.3 mg, 0.005 mmol), L11 (11.2 mg, 0.03 mmol), and CO (40 bar). Isolated yields were given before the parentheses. The NMR yields (values within the parentheses), regioselectivity of 2/3 and diastereoselectivity of 2 were determined by crude 1 HNMR analyses using dibromomethane as the internals tandard. Angewandte Chemie Research Articles 21588 www.angewandte.org precursor.A fter coordination of the alkyne to this complex followed by migratory insertion into the Pd À Hb ond, the corresponding alkenyl-Pd complex II should be obtained, which is transformed into the corresponding acyl complex III via CO coordination and insertion. 
Finally,i ntramolecular nucleophilic attack of hydroxyl on the acyl carbonyl leads to the formation of the desired lactone and regeneration of the [Pd-H] + species.Alternatively,the Pd II precursor is reduced in situ to aP d 0 species (probably by an excess amount of phosphine ligands). ThePd 0 species undergoes insertion into the oxygen-hydrogen bond of alkynol affording the corresponding alkoxypalladium complex. Then, insertion of CO into palladium-oxygen bond would give the Pd acyl species. Intramolecular addition of the palladium hydride to the triple bond would form metallacycle complex, which leads to the formation of the desired lactone and regenerates the catalyst (see Supporting Information, Scheme S4). In order to differentiate between these two possibilities,c ontrol experiments were performed. As shown in Scheme 4b (entry 1), the carbonylation of propargylic alcohol 1a was also carried out with aPd 0 pre-catalyst. However,inthe presence of Pd(dba) 2 under the standard reaction conditions,n oc onversion was observed. In contrast, using Pd(dba) 2 in the presence of 2mol %o fhydrochloric acid gave the desired product 2a in 98 %yield (Scheme 4b,entry 2). These experiments provided clear evidence for am echanism involving catalytically active palladium hydride species.A lthough the detailed reaction mechanism of the cyclocarbonylation of propargylic alcohols remains to be further elucidated, based on our previous studies on alkoxycarbonylations [6b, 13] as well as mechanistic studies by Cole-Hamilton, Drent and Sparkes, [14] it is most likely that this reaction goes through the Pd hydride mechanism shown in Scheme 4a. [15] Conclusion In summary,w ed eveloped the first catalyst system for ag eneral and selective cyclocarbonylation of alkynols to produce synthetically useful a-methylene-b-lactones.B y applying ad istinctive ligand, aw ide range of propargylic alcohols was efficiently transformed into the corresponding amethylene-b-lactones in good yields (up to 98 %) with high regio-and diastereoselectivity (> 20/1). Thea pplicability of this methodology is specifically highlighted by the functionalization of biologically active and natural molecules.C ombining this novel procedure with established functionalizations allows for an efficient preparation of privileged blactone scaffolds.T his efficient procedure features the following advantages:h igh atom economy,a dditive free reaction conditions,a vailability of substrates and obtained excellent selectivities.I tc omplements the current methodologies for carbonylations in organic synthesis as shown by the synthesis of 30 products;t he vast majority of them are new.
2020-06-24T13:06:59.925Z
2020-06-23T00:00:00.000
{ "year": 2020, "sha1": "6c29f4e04aa87a2be6a07eb6bd8d5067d95c8537", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/anie.202006550", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "ee24a3b5568c628f9386dea18172349c44e9712f", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
121009475
pes2o/s2orc
v3-fos-license
Shear rheology of a cell monolayer We report a systematic investigation of the mechanical properties of fibroblast cells using a novel cell monolayer rheology (CMR) technique. The new technique provides quantitative rheological parameters averaged over ∼106 cells making the experiments highly reproducible. Using this method, we are able to explore a broad range of cell responses not accessible using other present day techniques. We perform harmonic oscillation experiments and step shear or step stress experiments to reveal different viscoelastic regimes. The evolution of the live cells under externally imposed cyclic loading and unloading is also studied. Remarkably, the initially nonlinear response becomes linear at long timescales as well as at large amplitudes. Within the explored rates, nonlinear behaviour is only revealed by the effect of a nonzero average stress on the response to small, fast deformations. When the cell cytoskeletal crosslinks are made permanent using a fixing agent, the large amplitude linear response disappears and the cells exhibit a stress stiffening response instead. This result shows that the dynamic nature of the cross-links and/or filaments is responsible for the linear stress-strain response seen under large deformations. We rule out the involvement of myosin motors in this using the inhibitor drug blebbistatin. These experiments provide a broad framework for understanding the mechanical responses of the cortical actin cytoskeleton of fibroblasts to different imposed mechanical stimuli. Introduction The cytoskeleton is an intricate network of protein filaments [1,2] which controls the mechanical properties of eukaryotic cells. It is made of three types of filaments: rather flexible intermediate filaments, semiflexible actin filaments and comparably stiff microtubules. In highly motile cells like fibroblasts, actin filaments form a network below the outer cell membrane. This network is largely responsible for vital biomechanical functions, among them the control of cell shape, locomotion and division. The actin cytoskeleton itself is a complex entity. The filaments are highly dynamic and undergo rapid polymerisation or depolymerisation in response to chemical or mechanical stimuli, and in conjunction with polymerisation controlling proteins. The filaments are cross-linked by a large variety of protein molecules, some of which form dynamic cross-links whose properties are controlled by biochemical factors. Some of these cross-linkers, like the motor proteins myosin-II, possess the ability to generate active forces and relative movement between filaments [1,2]. The actin filament dynamics-polymerisation driven as well as motor driven-are sustained by a steady input of chemical energy derived from the hydrolysis of ATP molecules [1,2]. All these features make the cell cytoskeleton an active, complex gel. Depending on the mechanical and biochemical conditions, it has the ability to sustain large elastic stresses, can generate internal active stresses and flow, and can undergo rapid remodelling through filament reorganisation. These unique features makes the systematic rheological investigation of the cell cytoskeleton fascinating as well as challenging. 3 In recent years, important advances have been made in the investigation of the passive rheological properties of live cells occurring at short timescales (see review [3]). 
Microrheology experiments, which probe at micron scale [4,5] as well as microplate experiments probing isolated, whole cells [6] have shown broad power-law frequency responses. This behaviour is similar to other gel-like materials [7]. Recently, microplate experiments have shown that cells also exhibit a crossover from linear to nonlinear viscoelastic stiffening response as a function of stress, and independent of strain [8]. Experiments using the optical trap technique have shown that non-adherent cells can also exhibit fluid like behaviour at rates which are slow compared to the binding dynamics of cross-link [9]. Furthermore, recent microplate experiments have given clear evidence of viscoplasticity and kinematic hardening responses in fibroblasts, with simultaneous nonlinear stiffening [10]. These results clearly show the richness in the mechanical behaviour of the cell cytoskeleton. These cell mechanical properties often have vital functional implications [2], [11]- [16]. It is clear from the above examples that a comprehensive investigation of cell mechanics cannot be performed by probing only the linear viscoelastic moduli. It requires the systematic application of specific protocols designed to explore nonlinear responses. Cells being living systems, proper design of the experimental protocol becomes a crucial step as well as a challenge. As our results will show, linearity at small amplitudes cannot be taken for granted at all timescales and needs to be investigated. As the living system can undergo significant mechanical transformations in response to the stimuli used for probing its properties, it is essential to explore the time evolution and steady state behaviour of the responses, and the recovery process. Lastly, in order to identify general characteristic cell responses from 'random' cell to cell variations, it is of great advantage to be able to average the data over as many cells as possible. With these motivations, we have performed a broad and systematic investigation of the rheological properties of fibroblast cells using a novel cell monolayer rheology (CMR) technique. This new technique provides a one-shot averaging of rheological data over ∼10 6 cells, making the experiments highly reproducible and reliable. We show that mechanical behaviour of fibroblast cells can be characterised by a set of very robust, general mechanical responses. This article is organised as follows. We first describe the technical details and procedures for performing the novel CMR. Following this, we present a systematic set of rheological measurements on normal fibroblasts aimed at characterising the cell responses to different probing techniques. Standard harmonic oscillation experiments (continuous stimulation) are compared with step stress or step strain experiments to investigate the linear and nonlinear regimes that may arise. Cyclic loading experiments are used to explore the evolution of the cell mechanical properties under constant loading cycles starting from a rest state. Further, we perform constant strain rate experiments at different strain amplitudes and strain rates to reveal an unexpected linear stress-strain relation appearing at large deformations. We also explore the effect of an underlying preload on the response to small amplitude harmonic oscillations. Finally, the results obtained from normal cells are compared to those from cells whose cytoskeleton has been biochemically modified. 
These experiments demonstrate the contributions arising from dynamic elements in the cytoskeleton and shed light on to microscopic mechanisms. The systematic nature of our investigation provides a broad, quantitative, understanding of the main mechanical responses of the fibroblast cells under mechanical stimulation. A schematic representation of the rheometer set-up designed to perform CMR. A monolayer of single cells is held between two glass discs in a plate-plate geometry. A commercial rheometer performs shear rheological measurements on the monolayer. The fluid outlet is used to change the cell culture medium in order to treat the cells with biochemical agents which modify the cytoskeleton. CMR The rheological measurements are performed using a Modular Compact Rheometer (MCR-500) from Anton-Paar GmbH, which we modified to enable CMR as shown in figure 1 (European patent application [17]). The measurements are performed in a coaxial plate-plate geometry, where the cells are held between two parallel glass discs. The top plate (50 mm diameter) is attached to the measurement head of the rheometer and the bottom plate (70 mm diameter) to the base via metal mounts. An outlet drilled into the bottom plate is used to exchange the medium between the plates without changing the gap or mechanically disturbing the cells. A microscope which can scan along a radial direction allows optical observation of the cells during measurements and also the estimation of cell density. Images can be recorded on to the computer using a CCD camera at a maximum rate of 15 frames s −1 . The gap between the plates can be adjusted to within ±1 µm. The temperature of the sample is maintained using a Peltier unit at 25 ± 0.1 • C. A picture showing the modifications to the set-up is shown in figure 2. Performing rheological investigations on a monolayer of isolated cells involved several technical and procedural challenges. First of all, the gap between the plates has to be about the same as the size of a single isolated cell, which is only about 10 µm. Moreover, since cells are very soft objects, the total measurement area of the plates must be of the order of 20 cm 2 for the rheometer to be able to resolve the torques. One of the major problems then is in achieving the required parallelity between the two plates of the rheometer. The opposing faces must be parallel Photograph of the CMR set-up. The levelling screws (±1 µm precision) are used for fine corrections to the alignment, if the plates are not parallel after following the procedure described in the text [17]. to each other within 1 µm/10 cm = 10 −5 rad. This problem is solved by using special polished glass plates with a surface flatness of ∼500 nm, plus an appropriate preparation procedure. This and other procedural details are discussed in the following subsections. Mounting and optically adjusting the glass plates The steps involved in mounting the plates and ensuring the parallelity between them are as follows. 1. After thorough cleaning, the top glass plate is carefully placed on top of the bottom glass plate taking care that no dust particles are trapped in between the two plates. The parallelity between the plates can be easily verified by observing the interference fringes formed by a distant, broad, white light source (fluorescent lamp). The interference between the reflections from the two inner surfaces of the plates results in periodic coloured fringes (fringes of equal thickness) [18]. 
The fringes represent contours of equal gap between the two plates and the spacing between adjacent lines is proportional to the spatial gradients in the gap thickness. Using this method we ensure that the plates, when placed in contact, are parallel to within 1 µm. 2. Once the glass plates are satisfactorily in contact with each other, they are placed on the rheometer without separating them. The bottom glass plate is fixed to the bottom metal mount of the rheometer (see figure 3(a)). 3. The top metal plate of the rheometer is brought down until it makes contact with the top glass plate. This is automatically done by the rheometer as a normal zero-point setting. The metal plate is then locked in position to prevent rotation (figures 3(a) and (b)). 4. The top metal plate is then glued to the top glass plate using the optics grade, ultraviolet light curable adhesive Vitralit 6129 (Panacol-Elosol GmbH) ( figure 3(b)). This is a thick adhesive with a very low thermal expansion coefficient of 36 ppm K −1 , which later can be easily removed by leaving overnight in acetone. The glue is cured by exposure for a few minutes to UV light with a wavelength of 365 nm and an intensity of ∼100 mW cm −2 . This procedure ensures excellent parallelity between the two glass plates as the system is assembled with the two glass plates in perfect contact. A picture of the final set-up is shown in figure 2. The levelling screws shown here are used for any final corrections to parallelity that may be required. The laser can be used to obtain an interference pattern, as an alternate method for aligning the plates. After fixation, the parallelity is usually within 2-3 µm over the entire plate. Once fixed, the angular position of the top plate is locked and only small amplitude oscillations are applied about this position during measurements. The fixed top plate can be lifted up to 1 cm and brought back without any significant loss of parallelity. Coating the plates with adhesion promoting proteins Once the glass plates are positioned, they are coated with the adhesion promoting protein fibronectin (Sigma-Aldrich). For this a final solution of the protein at a concentration of 30 µg ml −1 in phosphate buffer solution (PBS) is prepared. Introducing the fibronectin solution 7 is straightforward. Since the plates are clean and dry, at a spacing of ∼200 µm, capillary forces readily suck the solution into the gap (figure 3(c)). The fibronectin solution is left between the plates for 1 h. Then the top plate is brought down to a nominal gap of 10 µm, which pushes the excess solution out. To remove it completely from the plates, the fibronectin solution is sucked with a pipette. Once the protein is adsorbed, the plates are rinsed three times using PBS by raising and lowering the plates as before. Preparation of cells All experiments are performed on Swiss 3T3 fibroblasts [19,20] from the German Collection of Microorganisms and Cell Cultures (DSMZ, Braunschweig, Germany) [21]. After defreezing cells stored in liquid nitrogen, fibroblasts are grown for at least a week and no longer than 2 months, following standard protocols. The medium used for regular culture is Dulbecco's modified Eagle medium (DMEM), with glucose 4.5 g l −1 and 10% fetal bovine serum (FBS). Experiments are performed in Iscove Medium with HEPES 25 mM and lyso-phosphatidic acid (LPA) (Sigma-Aldrich) at a concentration of 0.5 µM (instead of serum, which contains LPA [22]). 
Prior to an experiment, cells are detached from the flask by 5 min exposure to a trypsin solution, then centrifugated at 100 g for 2 min in regular culture medium, and finally resuspended at the desired concentration in the medium used for the experiment. All cell culture reagents are from Gibco (Invitrogen, Carlsbad, CA, USA). Loading the cells between the plates The cell suspension is introduced in a similar fashion as the fibronectin solution discussed earlier. Before lowering the top plate, it is mandatory to wait for about 10 min. The reason is that the cells must be allowed to sink down and stick to the bottom plate, or else the outward movement of the liquid induced by bringing the plate down removes some of the cells. This must be avoided, as we need a very high cell density. Waiting for too long before bringing the plate down is also undesirable, as the cells will spread excessively on to the bottom plate and are then unable to attach sufficiently to the top plate. This is known from the single fibroblast experiments described in [8]. After a prudential time, the top plate is brought down until most of the cells are slightly compressed (figures 3(d) and (e)). An image of the cells adhering to the plates, and under slight compression, is shown in figure 4. The compression of the cells is easily observed by measuring the increase in their diameters from the recorded images as is shown in figure 5. The cells are left in this state for 1 h to allow them to adhere to both plates, before measurements are performed. After this period, the fraction of the cells that are firmly adherent to both the plates can be roughly estimated as follows: (i) by applying a step shear strain of about 20% and observing the cell deformation and (ii) by observing the cell geometry at the upper and lower boundaries, where adherent cells have roughly constant diameter. This fraction is typically of the order of 50% of the cells for a gap of about 8 µm and varies a little from cell preparation to preparation and with plate gap. The responses reported here are unaffected by the exact fraction of adhering cells as long as the stresses are above the resolution of the rheometer (also see gap effect in appendix). Since the main focus of the present work is on reproducible, general aspects of nonlinear behaviour, the precise value of the moduli are not relevant. We have therefore refrained from correcting stress values for the fraction of adhering cells. The gap between the plates is such that the smaller cells remain unperturbed. Method for introducing drugs Drugs which alter the cytoskeletal structure can be easily introduced by adding them at a final concentration along the rim of the plates and sucking the medium between the plates through the outlet in the bottom plate (see figure 1). Usually, the gravitational flow is enough to ensure smooth exchange of media within a few minutes without causing any significant flow disturbances to the cells. The cells are observed throughout this process to ensure that the flow does not change the cell density or alter their morphology. Top plate with annular ring If large shear deformations are to be imposed in order to explore the nonlinear regime, a difficulty inherent to the geometry arises. In a plate-plate configuration the deformation field is not uniform: the shear deformation increases proportional to the radius. For most experiments this effect can be neglected, since the dominant contribution to the torque comes from the strongly sheared cells on the outer edge. 
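As a rough illustration of the non-uniform deformation field mentioned above (the symbols here are ours, not taken from the paper): in a plate-plate geometry with gap h and plate rotation angle θ, the local shear strain at radius r is γ(r) = rθ/h, and the measured torque is M = 2π ∫_0^R σ(γ(r)) r² dr. The r² weighting of this integral is why the signal is dominated by the strongly sheared cells near the outer edge, and why carving away the inner circle of the top plate, as described above, removes the weakly and non-uniformly sheared inner region from the measurement without changing the qualitative response.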
If better resolution is desired, a simple solution is 9 available. On the top plate an inner circle with a depth of ∼100 µm is carved away with a standard milling machine. Since the cells are at most ∼40 µm large, only those located on the outer, non-processed section will be in contact with both plates and contribute to the measurement. We have performed measurements with such a glass plate to confirm the feasibility of the approach. Qualitatively the responses are the same. Major advantages of the CMR technique The novel CMR technique provides us with the following major advantages. (i) Probing a large number of cells (∼10 6 ) provides a one shot average of the mechanical properties, which can vary strongly from cell to cell in a given population. This makes the experiment highly reproducible from one cell preparation to another. (ii) The nature of cell-substrate adhesion can strongly influence the mechanical properties. Our technique allows the control of cell-substrate adhesion by using functionalised glass plates while at the same time keeping the cell geometry relatively simple (see figure 4). (iii) A large variety of probing techniques-oscillatory probing at varying amplitude or frequency, controlled stress or strain experiments, ramp experiments, and large amplitude deformations are all possible, a necessity for investigating complex materials. Harmonic oscillation experiments We begin our investigation by characterising the response of the cell monolayer to imposed sinusoidal strain oscillations. Our intention is to explore the extent of linearity of the response from this living system. We then study the frequency response of the system at small amplitudes. Amplitude sweep. For strain amplitudes between 1 and 10% and a frequency of 1 Hz, increasing the strain amplitude results in a less and less elastic behaviour as shown in figure 6. The storage modulus goes roughly as G ∼ log (1 / γ ), whereas the loss modulus G becomes plateau-like for γ < 3%. Thus, a linear regime in a strict sense does not exist in this measurement range. Figure 7 shows a frequency sweep data obtained at an amplitude of 2%. Both G and G show a power-law behaviour throughout the studied range. The loss tangent remains approximately constant as shown. The power-law behaviour observed here is in excellent agreement with that previously reported using microrheology and single cell techniques [4,6]. The curves for G and G remain almost parallel throughout the studied frequency range. Stress relaxation, recovery and creep experiments As cells are complex living materials which can actively respond to different types of mechanical stimuli, we now proceed to investigate their response to stepwise loading by performing relaxation and creep experiments. The two types of step loading experiments are then compared to study the linear and nonlinear cell response that may arise. For comparing the relaxation and creep compliance of the cells we have devised the protocol shown in figure 8. First, we impose a 5% step strain and measure stress relaxation during 10 min (S1 in figure 8). Then, a large step strain from γ = 5% to γ = 50% is applied and a second relaxation curve is measured (S2). After 10 min at 50% strain, the monolayer is unloaded in a stepwise fashion and the stress kept at zero for 35 min in order to measure strain recovery (R1). 
Next, we perform two subsequent creep experiments, the first one at a low stress and the second one at a higher stress (C1 and C2 in figure 8). The stress values are chosen from the previous relaxation experiments so that S1 can be compared with C1, and S2 with C2. Finally, a second full unloading to σ = 0 is performed (R2). The whole procedure takes about 90 min, which is a reasonable time for a measurement at 25 °C. S1 and C1 are expected to be close to a linear regime, while the large steps S2 and C2 should reveal effects of strong nonlinearities. The different responses are discussed and compared below.

Figure 9 shows stress relaxation curves obtained at constant strain, after a 0% → 5% and a 5% → 50% strain step. As expected from the frequency sweeps discussed earlier, the relaxation cannot be described by a single exponential and is close to a power-law. Remarkably, no significant nonlinearity is observed as a function of the applied strain amplitude. Normalising by the initial values suffices to collapse the curves.

Strain recovery at zero stress. After a deformation and subsequent unloading to zero stress, do irreversible strains remain? To decide on this, we perform the following experiment. After imposing a 50% shear for 10 min, the stress σ is taken to zero and the time evolution of the strain γ is recorded (R1 in figure 8). Figure 10 shows the extent of recovery γ(−0) − γ(t), where the strain prior to the unloading is subtracted. In the first recovery experiment, R1, the strain recovers from 50% to 20% in a 2000 s time period and is still slowly recovering. At this pace, a full recovery would require many hours. The question thus seems doomed to remain open.

Figure 9 caption (stress relaxation for the strain steps of figure 8): The stress σ is divided by its value right after the step, σ0, and shown as a function of the time elapsed after the strain step. Other than the prefactor, no significant differences can be observed between the two curves. The apparently larger scatter in the curve at γ = 5% is due to the normalisation by σ0. The scatter in the stress σ is essentially constant throughout the experiment.

Figure 11 shows the compliances from the creep experiments C1 and C2. The compliance functions are defined as J(t) = (γ(t) − γ(−0))/(σ − σ(−0)), where γ(−0) is the deformation prior to the stress step, σ the imposed constant stress during creep, and σ(−0) the stress prior to the step. Experiment C2, performed at a large stress of σ = 25 Pa, gives a significantly larger compliance for times shorter than 100 s. Remarkably, at longer times it approaches the small-stress compliance.

Convolution of the relaxation modulus and compliance. We now compare the responses obtained from the relaxation and creep experiments, to assess the linearity of the response. Linear behaviour is given by Boltzmann's superposition principle: the stress is a linear function of the strain history, σ(t) = ∫_{−∞}^{t} G(t − t′) dγ(t′), where the function G(t) is the relaxation modulus [23]. It is straightforward to show that the convolution relation ∫_{0}^{t} G(t − t′) J(t′) dt′ = t (1) holds [23]. With this convolution relation one may decide whether the material behaves as a passive, linear system. We proceed as follows. The relaxation moduli are obtained from the relaxation experiments S1 and S2 as G(t) = σ(t)/(γ − γ(−0)), where γ is the imposed constant strain and γ(−0) is the strain prior to the strain step. We numerically convolute the measured relaxation moduli with the creep compliances to assess the validity of equation (1). The advantage of the procedure is that we work directly with the measured response functions, instead of ad hoc choosing a fitting function.
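The numerical convolution check described above can be set up in a few lines. The following is a minimal sketch, assuming the measured G(t) and J(t) have already been resampled onto a common, uniformly spaced time grid; the function and array names are ours, and the Maxwell-model data at the end merely verify that the discretisation reproduces equation (1), they are not the measured cell data.

import numpy as np

def convolution_check(t, G, J):
    # Evaluate int_0^t G(t - t') J(t') dt' on a uniform time grid with the
    # trapezoidal rule; for a linear, passive material the result equals t.
    dt = t[1] - t[0]
    conv = np.empty_like(t)
    for i in range(len(t)):
        integrand = G[i::-1] * J[:i + 1]   # G(t_i - t_j) * J(t_j), j = 0..i
        conv[i] = dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return conv

# Consistency test with a single Maxwell element, for which
# G(t) = G0*exp(-t/tau) and J(t) = (1 + t/tau)/G0 satisfy equation (1) exactly.
t = np.linspace(0.0, 10.0, 2001)
G0, tau = 100.0, 2.0
G = G0 * np.exp(-t / tau)
J = (1.0 + t / tau) / G0
print(np.max(np.abs(convolution_check(t, G, J) - t)))   # small discretisation error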
We convolute G(t) from S1 with J (t) from C1 which corresponds to the small stress regime, and G(t) from S2 with J (t) from C2 for the large stress regime as shown in figure 12. Both convolutions deviate significantly from the linear behaviour at short times. response for times in the range 100-1000 s. Thus, the initially nonlinear behaviour becomes linear at long times. Cyclic loading For a complex living system it is not clear how the probing method itself may affect the response of the system. Cyclic experiments allow us to explore the evolution of the complex system under a given loading condition. Furthermore, constant rate cyclic loading experiments are well suited for investigating the cell responses to different deformation rates and amplitudes independently. Existence of a limit cycle. We subject the cells to repeated loading-unloading cycles at a constant strain rate. As can be seen in figure 13, after the first couple of cycles, the monolayer approaches a limit cycle with very little variation from cycle to cycle. As seen in figure 13, the limit cycle has a higher asymptotic slope than the virgin curve (first cycle). A close comparison between the first and the limit cycle ( figure 13, bottom right) shows that the response is not affected below 2% strain. Beyond 5%, the overall slope is larger for the limit cycle. This effect of continuous cyclic loading on the shape of the stress-strain Top: the input strain-time function. Lower left: stress-strain curves showing that the initial response evolves towards a steady state limit cycle within the first two to three cycles. Lower right: a comparison between the initial, virgin response (gray curve) and the 5th cycle (black curve) for small loading strains. The initial stress value has been subtracted for comparison purpose. Below ∼2%, cyclic loading does not significantly affect the response. loops is a very robust feature of the fibroblast monolayer and the same behaviour is observed at the single cell level [24]. The effects of a cyclic straining are reversible. When the cells are maintained at zero strain for more than 10 min and the experiment repeated, the new virgin curve and the limit cycle are quantitatively similar to the previous ones (data not shown). Figure 14 shows single loading/unloading cycles with different amplitudes. A rest time of 10 min elapses between each cycle, which ensures a sufficient stress relaxation. The stress-strain curves share a common envelope. This indicates that the rest time sufficed to recover the virgin state. The slope dσ/dγ reaches a roughly constant value after the initial ∼10% strain. For small amplitudes the unloading response is very similar to the loading response, essentially a point inversion of the latter. As the amplitude is increased, the unloading response changes its shape noticeably. When unloading, again a linear stress-strain relation is obtained as the strain is lowered from its peak value. Rate dependence. We perform single cycles at different rates, waiting 10 min between each cycle. Changing the rate changes the overall slope of the curves, as can be seen from figure 15. The hysteresis loops are also seen to become wider. Top: the strain rate |γ | is kept constant at 0.1 s −1 and the amplitudes are increased in the sequence 5%, 10%, 20% and 50%. A rest time of 10 min. elapses between each loading to allow for stress relaxation. Bottom: stress as a function of strain for different strain amplitudes. 
The stress-strain curve becomes linear after the initial loading or unloading. Note also that the initial response during unloading becomes stiffer at large amplitudes. Inset: evolution and recovery of the stress during the first loading, unloading and rest.

Effect of the average stress on the viscoelastic moduli. In earlier sections, we have performed a detailed characterisation of the cell response to oscillatory and cyclic ramp experiments by varying either the deformation rate or the amplitude. These experiments were performed on cells which were at an initial state where the stress and strain are almost completely relaxed. It is interesting, however, to investigate how the cell rheological properties are modified when the cells are under a nonzero stress or strain condition. For this, we perform the following experiment. Starting from a zero stress, zero strain condition, we apply a fast strain step to the cells. The average strain γ̄ is then maintained constant while a small amplitude strain oscillation (2% and 5 Hz) is superimposed; γ(t) = γ̄ + δγ e^(iωt). After the strain step, the average cell stress will relax with time as σ(t) = σ̄(t) + δσ(t) e^(iωt). Measurements are performed when the stress relaxation is slow compared to the frequency of the imposed oscillations. In this way, by applying different strain steps, the moduli can be measured for a large range of stress or strain values. The procedure, though unusual for passive materials, has proved successful for studying stiffening responses in biomechanics [25]. The results are shown in figure 16. It can be seen that both moduli, G′ and G″, stiffen as a function of the average stress above a threshold stress, i.e. G′ = G′(σ̄) and G″ = G″(σ̄) above the threshold. In particular, stiffening can be observed during stress relaxation at a constant strain for γ̄ = 100 and 120%. Above the threshold, both moduli follow a power-law with an exponent of about 0.7.

Drug experiments. After characterising the normal fibroblasts, we now discuss a series of experiments aimed at investigating the role of different cytoskeletal elements like actin filaments and myosin motor proteins in the mechanical responses detailed above. Moreover, we perform, for the first time, experiments demonstrating major qualitative differences between the mechanical responses of an active, living cell and those of a cell which is made passive, i.e. permanently crosslinked.

Actin depolymerisation. In order to investigate the contribution of the actin network in the cells to the mechanical properties mentioned above, we treated the cells with the actin filament depolymerising drug latrunculin-A [26]. For this, we first characterised the normal cells by performing step-strain experiments and then introduced the drug at a final concentration of 0.2 µg ml−1 without mechanically perturbing the cells, using the method discussed in section 2.5. We observe that a 10 min exposure to the drug at 25 °C induces a marked drop in the stiffness of the monolayer, as shown in figure 17. For a given shear, the stresses are two orders of magnitude smaller, barely resolvable by the rheometer.

Comparison of living and 'fixed' cells. Due to the active nature of the cytoskeleton, it is interesting to try and elucidate the contributions from the dynamical factors to the mechanical responses mentioned above for normal cells. With this in mind, we attempted to generate purely passive cells (a dead, equilibrium system) by exposing the cells to the fixation agent glutaraldehyde.
This process binds the network in such a way that all dynamical processes like filament polymerisation-depolymerisation, kinetics of motors and crosslinking proteins, etc are arrested, though preserving the cytoskeletal structure [27]. As can be seen from figure 18, the typical mechanical behaviour of normal cells is dramatically altered after fixation of the cells by a 10 min. exposure to a 0.1% glutaraldehyde solution. Most notably, the large-amplitude linear behaviour of the limit cycle is completely abolished by the treatment. The response of the passive, fixed cells shows a positive curvature d 2 σ/dγ 2 . Due to this dramatic stiffening the stress at 100% shear increases by an order of magnitude. To further assess the effect of glutaraldehyde on cytoskeletal structure, we compare the numerical derivative dσ/dγ of the stress-strain relation obtained after fixation with previous results on single cells ( figure 18, inset). As a function of stress, the slope dσ/dγ is remarkably similar to the stiffening master-relation described in [8]. This agreement between 'dead' (fixed) and living samples conclusively proves that the stiffening response in living cells reported in [8] is due to the nonlinear elasticity of the cytoskeletal network, independent of biological processes such as e.g. signalling, restructuring, crosslink-dynamics or motor activity. Moreover, since glutaraldehyde fixation does not significantly alter the elastic response, its 'stiffening' effect must actually be to slow down inelastic flow of the cytoskeleton-presumably by preventing detachment of crosslinks. Glutaraldehyde fixation has a similar effect in single fibroblasts under uniaxial elongation [10]. show a marked stiffening response rather than the linear behaviour observed for the cells before treatment (filled symbols). Fixed cells reach stresses about an order of magnitude larger than non-treated ones. Inset: comparison between data obtained from a fixed monolayer and that previously reported for fibroblasts using a single cell stretching technique [8]. The numerical derivative dσ/dγ as a function of stress σ for a fixed monolayer (black line) is compared with the scaled master-relation data from single fibroblasts (grey circles, modified from [8]). Inhibition of myosin-II motors. The glutaraldehyde experiment described in the previous section shows that the cell response is drastically altered when the cells are made passive or dead with fixed crosslinks and filaments. However, it is not clear as to what extent the dynamics of motor molecules are involved in controlling cell mechanical properties. A separate experiment is required to explore this aspect. In order to assess the role of myosin-II motors on cell mechanics, we inhibit them using the specific drug blebbistatin [28,29]. Ramp experiments, like the one discussed in the previous section, do not reveal any qualitative differences compared to normal cells (data not shown). We still observe a linear stress-strain relation after the initial loading or unloading, although the moduli G and G are slightly lower compared to normal cells. Previously reported microrheology experiments have shown a qualitative difference in the frequency response obtained from blebbistatin treated cells [5]. In order to compare our results with this report we performed frequency sweeps using the present technique and obtained the results shown in figure 19. As a control, we perform an amplitude sweep and frequency sweep on the untreated cell monolayer. 
We then add the drug at a final concentration of 150 micromolar to the cells. After a waiting time of 10 min the amplitude sweep and frequency sweep is repeated. No significant difference is observed in any of the two experiments, aside from a prefactor. To rule out the possibility of the drug not working, we tested its effect on arresting the oscillatory dynamics of freely suspended fibroblasts [30]. Summary of experimental results The CMR technique is a very versatile method for probing the complex rheological properties of cells. Linear viscoelastic properties, nonlinear responses which arise under different loading conditions, temporal variations, and inelastic flow properties are all accessible using this method. In the past, rheological investigations on collections of living cells often addressed cells inside a protein matrix, such as collagen gels [31,32]. Interpretation of the data obtained in this way is difficult, as the extracellular matrix itself has mechanical properties very similar to those of cells. In our case, the external medium is a Newtonian liquid with a negligible viscosity. Another approach are sedimented cell pellets [12]. Our cell monolayer technique has the advantage of a clean geometry where cells are mechanically independent from each other. Therefore, each measurement gives an arithmetic mean over ∼10 6 cells, making the experimental results highly reproducible and easy to perform compared to single cell techniques. Functionalising the plates using adhesion promoting proteins allows the cells to form specific cell-substrate adhesion, at the same time maintaining a simple overall cell geometry when compared to spread cells used in microrheology studies. As we demonstrate, the role of different cytoskeletal components and the comparison between active and passivated cells can also be performed using biochemical techniques, without mechanically perturbing the cell monolayer. Incidentally, the results obtained do depend on the gap between the plates. An optimum gap is chosen for the experiments so that the cells are not too strongly compressed (see appendix). Harmonic oscillation experiments clearly show that a strict linear regime does not exist for the storage modulus G even at the smallest strain amplitude of about 0.2%, at a frequency of 5 Hz ( figure 6). An amplitude range of 0.2-2% may be considered approximately linear. The frequency sweep in this approximately linear regime (figure 7) exhibits a clear power-law increase of both G and G over three decades of frequency (10 −2 -30 Hz). Moreover, the G and G curves remain parallel throughout the frequency range. There is no crossover from an elastic to viscous behaviour. Relaxation, creep and recovery experiments performed by applying step strains and step stresses, respectively, reveal the existence of a continuum of relaxation times in the system (figure 9). The stress relaxation continues even at the longest observation times (10 min). The relaxation spectra obtained at different loading strains are different only by a constant scaling factor ( figure 9). Creep as well as strain recovery experiments (figures 8 and 11) too show long time recovery effects (>20 min). Convolution of relaxation moduli and creep compliances shows that initially nonlinear responses become linear after a certain time, which amounts to 100 s for experiments performed at ∼100% strain ( figure 12). Cyclic loading experiments show an evolution of the initial response towards a steady state 'limit cycle' (figure 13). 
When a rest time of 10 min is allowed the system recovers the initial 'virgin response'. This can be observed for different loading rates. Varying the strain amplitude reveals a surprising feature of the cell response. The response which is nonlinear at small strains becomes almost perfectly linear as the strain increases ( figure 14). This entry to linearity at large amplitudes is observed for the studied strain rates of 10 −3 -1 s −1 ( figure 15). On reversing the sense of strain rate (unloading) the cells again exhibit an initial nonlinear response and a later linear response. Within the explored range, with increasing strain rate the linear modulus and the hysteresis increase. 23 The viscoelastic moduli as a function of stress show a power-law stiffening response above a threshold stress ( figure 16). Both moduli stiffen as a function of stress with a very similar power-law exponent. The threshold stress values are also similar for both moduli. Drug experiments reveal the following. When actin filaments are depolymerised, the cells are transformed from viscoelastic objects to almost purely viscous ones. When the cell cytoskeleton is permanently crosslinked using a fixation agent, the large amplitude linear response is replaced by a strong stiffening response. Inhibiting myosin motor molecules, on the other hand, does not produce any qualitative change in the rheological behaviour of the cells. Comparison of different results In this section, we compare the different results discussed above in order to reveal some general trends in fibroblast cell mechanics and compare them with the recent literature. Cell response timescales: the frequency scan performed using small amplitude harmonic oscillations (figure 7) is in excellent agreement with the relaxation spectra obtained from the step strain experiments ( figure 9). Both experiments show that there exists a continuum of relaxation times in the system. The relaxation continues to happen even at the longest observation times. Power-law relaxation spectra have been observed for a variety of cell types using microrheology and atomic force microscopy techniques [3,4,33,34]. This, apart from validating the CMR technique, shows that the collective response of 10 6 cells is indeed comparable to single cell responses obtained under different conditions. Recovery spectra recorded at zero stress gives a similar picture for the timescales involved (figure 10), with the strain recovery continuing even at long times (>20 min). Constant strain rate experiments performed at different loading rates show an increase in the slope of the stress-strain relation with increasing rate ( figure 15). This is expected for viscoelastic materials, and can in principle be understood in terms of linear viscoelasticity. Linear and nonlinear regimes: the stress relaxation curves recorded at different strains and recovery recorded at different initial strain values collapse to the same curve on normalisation using the respective initial values. Thus, these responses are independent of the initial loading condition, as expected for a linear regime. In the case of creep experiments, at large stresses a qualitatively different response is observed (see figure 11). However, as the convolution of compliance and relaxation modulus conclusively shows, the cell monolayer asymptotically becomes a linear, passive system for times longer than ∼100 s. This is even more remarkable as the strains involved are of the order of 100%. 
Another counterintuitive behaviour is observed in cyclic constant-rate loading experiments (figures 14 and 15). The response which is nonlinear at small strains becomes almost perfectly linear at large strains. On reversing the sense of deformation, the response is nonlinear at large strains and becomes almost linear at lower strain values. These effects are more clearly seen at lower strain rates ( figure 15). Thus, the initial response of the cells to large constant rate straining is nonlinear and there is a crossover to linear behaviour as the straining is continued. This observation is very similar to that observed in single cells using the microplate stretching technique [10,24]. Stress stiffening: harmonic oscillation experiments performed on cells under nonzero average stress show a power-law stiffening response for both G and G as a function of the average stress ( figure 16). This strain independent stress-stiffening observed in our monolayer shearing experiment is remarkably similar to that previously observed in single cell stretching experiments [8]. Similar stiffening responses have also been reported for tissues [24]. Unlike the increase in the modulus as a function of rate observed in figure 15, this stiffening is a nonlinear effect. When the cell cytoskeleton is made permanent using a fixing agent, the cell exhibits a stiffening response over a very wide amplitude range replacing the linear stress-strain relation observed in normal cells ( figure 18 (inset)). Correlation between G and G : it is also interesting to compare the correlation in the behaviour of the two moduli G and G in the different experiments. In frequency scans, both moduli increase with frequency with very similar power-law exponents for the entire range of frequencies ( figure 7). The loss tangent remains almost constant as previously reported using other techniques [4,34]. Such a strongly correlated behaviour is even more striking in the stress-stiffening response ( figure 16). Here, the crossover threshold as well as the exponent for the power-law stiffening response are very much comparable for both moduli. However, in experiments probing the frequency response of single cells in suspension using an optical stretcher technique, a crossover from elastic to viscous behaviour is observed as the frequency is reduced [35]. Presumably this reflects the differences in boundary conditions, which should lead to different cytoskeletal structures. Comparison of normal cells and biochemically modified cells: depolymerisation of actin filaments causes the cells to lose their elastic properties almost completely ( figure 17). This transformation shows that the actin network plays an important role in defining the mechanical properties of these cells. Large amplitude, constant rate loading experiments performed on normal living cells and cells with fixed cytoskeleton (permanent crosslink and filament structure) produce completely different cell responses ( figure 18). The linear regime which is clearly observed in normal cells for a wide range of loading rates is replaced by a stiffening response in fixed cells. This experiment conclusively proves that dynamic crosslinks, or filaments are necessary for the large amplitude responses exhibited by normal cells. Motor proteins, which also form a class of dynamic crosslinks with the ability to generate active forces and relative motion between filaments, do not appear to play a prominent role in the studied responses as discussed in the text. 
A comparison of the frequency sweeps on normal cells and those performed on cells with inhibited myosin motors shows only a slight reduction in both G′ as well as G″ upon drug treatment, while retaining the qualitative features of the normal cell response (figure 19). This result, which we obtain by shearing a cell monolayer, is qualitatively different from microrheology experiments performed using optical tweezers on single cells [5]. In the latter case, the loss modulus G″ becomes independent of frequency when myosin motors are inhibited.

Conclusions and speculations

Taken together, our results lead to the following picture of cell mechanics. The actin cytoskeleton defines cell mechanical behaviour via an interplay between nonlinear elastic behaviour and linear inelastic behaviour. At short timescales, crosslinks stick and the network responds elastically. Due to its nonlinear elastic properties, it stiffens at a large average stress. This stiffening response is by now well established as a general feature of biopolymer networks [3,25,36], and can even be observed in vitro in crosslinked actin networks [37]. Proposed explanations range from entropic stretching [37,38] to enthalpic bending [8,39,40]. If the stress is further increased, the extent of linear inelastic flow increases dramatically in a non-Newtonian fashion. The microscopic mechanism for this inelastic response is most likely crosslink slippage, but may also involve filament growth. The inelastic flow regime goes hand in hand with a remarkably linear stress-strain relation. Such a behaviour is by no means unique to biological cells: under the name of kinematic hardening [41,42], it is commonly observed in composite alloys [43,44], as well as in rubbery polymers [45] and granular materials [46]. Microscopic understanding of this hardening response may well be a crucial step for the further development of our knowledge of cell mechanics. Living cells may respond to the sudden mechanical stimulus by undergoing a transient reorganisation of their internal structure. Subsequently, since the input is kept constant after the initial stimulus, the system could evolve towards a 'steady state organization' with time. This may explain why a strict linear regime is elusive in the amplitude sweeps (even at about 0.1% strain), where the cell is under continuous mechanical perturbation.

Bottom: at 4 µm, the monolayer appears very stiff. Even though the data is noisy, an increase in stress can be observed after about 100 s. At larger gaps the stresses are about one order of magnitude lower and the data is much cleaner. The stress is seen to increase about 100 s after applying the step strain. At 4 µm, the cells are strongly compressed and the nucleus, too, is expected to be under compression.
2019-04-18T13:07:49.701Z
2007-11-01T00:00:00.000
{ "year": 2007, "sha1": "373eaff5b0f3d8f6a2abf62f6229ffe3dd37e565", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1367-2630/9/11/419", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "edbb0d4e844b089ebb53e2635614f1ab3e294357", "s2fieldsofstudy": [ "Engineering", "Biology", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
21190526
pes2o/s2orc
v3-fos-license
Social and Policy Aspects of Climate Change Adaptation in Urban Forests of Belgrade

Original scientific paper

Abstract

Background and Purpose: Climate change has an impact on economic and natural systems as well as human health. These impacts are particularly visible in urbanised areas. Urban forests, which are one of the main natural features of the cities, are threatened by climate change. Generally, the role of forests in combating climate change is widely recognised and its significance is recognised also in urban areas. However, appropriate responses to climate change are usually lacking in their management. Climate change adaptation in relation to urban forests has been studied less often in comparison to climate change mitigation. Adaptive capacity of forests to climate change consists of adaptive capacity of forests as an ecological system and adaptive capacity of related socio-economic factors. The latter determines the capacity of a system and its actors to implement planned actions. This paper studies social and policy aspects of adaptation processes in urban forests of Belgrade.

Materials and Methods: For the purpose of this study, content analysis of urban forest policy and management documents was applied. Furthermore, in-depth interviews with urban forest managers and Q-methodology surveys with urban forestry stakeholders were conducted. Triangulation of these data is used to assure validity of results.

Results: The results show weak integration of climate change issues in urban forest policy and management documents, as well as weak responses by managers. A comprehensive and systematic approach to this challenge does not exist. Three perspectives towards climate change are distinguished: (I) 'sceptics' do not perceive climate change as a challenge, (II) the 'general-awareness perspective' is aware of climate change issues but without concrete concerns toward urban forests, (III) the 'management-oriented perspective' highlights specific challenges related to urban forest management. Awareness of urban forest managers and stakeholders towards climate change adaptation is characterized by assumptions and uncertainties, which are the result of poor knowledge, lack of data on local impacts and weak communication.

Conclusions: The results indicate the need for building urban forestry institutional and human capacities for creating effective climate change adaptation responses, which will lead to better understanding of challenges posed by climate change and the ability to make the trade-offs between possible decisions.

Keywords: awareness, urban forest management, climate communication, adaptation measures, institutional and human capacity, Serbia

1 The European Forest Institute Central-East and South-East European Regional Office (EFICEEC-EFISEE), c/o University of Natural Resources and Life Sciences, Vienna, Feistmantelstr.
4, A-1180 Vienna, Austria 2 University of Belgrade, Faculty of Forestry, Department of Landscape Architecture and Horticulture, Kneza Višeslava 1, RS-11000 Belgrade, Serbia INTRODUCTION In the last century the global urban population has increased rapidly from 746 million in 1950 to over 3.9 billion in 2014 [1].In Europe, 73% of the population lives in urban areas [1].Population growth, together with technological development and increased consumption levels, has increased the pressure on urban centres, its natural resources and ecological systems [2].Climate change is a major challenge today's society needs to cope with, especially as it has been proven that human activities have been the dominant cause of it [3].The Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) has confirmed the important role of cities in the development and delivery of climate change responses, as cities are in many ways affected by climate change and are a focal point of vulnerability, as their functioning relies on complex infrastructures [3,4].Urban forests are part of a green infrastructure which is one element that can contribute to climate change adaptation in cities [5].However, the role of cities [3] and its green infrastructure is still marginally studied in relation to variety of issues related to climate change.So far mitigation actions have been supported by a wide range of policies in various sectors [6], while adaptation has only become prominent lately. 'Urban forests' in this paper implies all forests and other tree-based green areas (e.g.parks, tree alleys) that are situated within the administrative border of a city [7,8].Urban forests contribute to mitigating climate change in many ways: by controlling greenhouse gases (GHGs) emissions, the shading effect on buildings (reduces energy use and carbon emissions), by regulating the urban microclimate (reducing albedo, providing shade and cover) and the hydrological regime of cities [9][10][11].At the same time, urban forests are becoming highly threatened by climate change [4].Assessing the vulnerability of urban forestry systems is essential for adaptation processes and their long-term sustainable development [5].Adaptation mainly is a matter of local importance [12,13] and promotes the implementation of measures which are useful in the present, and at the same time reduce the risk of unacceptable losses in the future [13,14].FAO [15] sees adaptive forest management as an essential for addressing arising challenges and reducing forest vulnerability.In adaptive management various measures can be included: selection of drought-tolerant or pestresistant species, use of stock from a range of provenances, assisting natural regeneration of functional species, and measures targeted to individual requirements of single species [15][16][17].All of these measures need to be adapted to site specific forest conditions [15]. 
Adaptive capacity of urban forests comprises adaptive capacity of forests as ecological systems and adaptive capacity of socio-economic factors of urban forestry. Adaptive capacity of socio-economic factors determines the capability of systems and their actors to implement planned actions. "Adaptive organizations that incorporate organizational learning enhance social capital through internal and external linkages, partnerships, and networks, and make room for innovation and multi-directional information flow" are needed nowadays [7, p6]. Effective urban forest management with regard to climate change must be responsive to a wide variety of economic, social, political and environmental circumstances [13]. Thus, effective communication on climate change is very important [18]. Developing a dialogue within and outside of the urban forest management community is essential, and will increase the range of possible actions [13]. It is recognised that climate change ultimately requires a national response, and that much more attention must be given to how decisions are made [19] and how decision-makers value expected risks and benefits [20]. Planning for climate adaptation requires comparison of decision options, and these should be based on relevant scientific results which are effectively communicated and perceived [20]. Perception is recognised as an active process of understanding, through which people construct their own version of reality [19], and therefore influences decisions.

Belgrade is the capital of Serbia and has faced enlargement and an intensive urbanisation process in the last decades, mainly at the expense of green areas [21]. In the last 50 years, an increase in mean annual temperature has been observed in all parts of Belgrade (up to 0.04 °C/yr), as well as in precipitation (up to 1.7 mm/yr) [22], which demonstrates a demand for climate change adaptation strategies in all sectors (including urban forestry) in Belgrade [23]. The protection of existing and the planning of new urban forests, as well as the creation of responses to climate change, is identified as a need by the city administration [24].

This paper focuses primarily on social and policy aspects of adaptation processes in urban forestry in Belgrade. By applying mixed methods research, it aims to understand: (i) current climate change adaptation practices in urban forest management of Belgrade, and (ii) perceptions of various urban forestry stakeholders toward the issue. The following research questions are addressed:

METHODS

In this research the following methods were applied: content analysis of relevant policy and management documents, in-depth interviews and Q-methodology surveys. Triangulation of the data is used to assure the validity of results, and to control possible weaknesses and biases. For the purpose of this research a case study approach has been implemented [25], with urban forests of the city of Belgrade as the selected case.
Analysis of Documents
Urban forest-related policy and management documents (Table 1) were analysed by content analysis [26]. The aspects searched for were: (i) the contribution of urban forests to mitigating/adapting to climate change; (ii) the vulnerability of urban forests to climate change; (iii) climate change impacts on urban forests; and (iv) climate change adaptation measures considered important for urban forest management. It was analysed whether these aspects appear directly or indirectly in the documents and in how much detail they are presented.

In-Depth Interviews
In-depth interviews were conducted with six urban forestry managers from the two main management bodies in Belgrade. Two interviews were conducted with the heads of the management units, and the other four interviewees were chosen with the snowball technique. The selection criterion was that all were in charge of developing management plans for forests in Belgrade. The interviews were conducted in May 2012, with an average length of 45 minutes. All interviews started with the question of whether the managers were confronted with climate change in their work, and how important this issue was to them. The next question raised issues of communication and policy-making and led into specific details of climate change adaptation of urban forests. At the end, the main challenges for adaptation processes in urban forests were stressed.

[Table 1 (continued). Aspects related to climate change from the analysed urban forest management and policy documents; legend: + directly addressed, +/- indirectly addressed, - missing. The continuation quotes adaptation-related aims of the National Sustainable Development Strategy [41] (e.g. adaptation of business entities in the energy, industry, transport, agriculture and forestry, utility and housing sectors; a policy of climate protection and compliance with international agreements; developing an action plan for adaptation to climate change for all economic sectors; and the design, development and implementation of an adequate health response system to the effects of global climate change), challenges identified in the Tourism Development Strategy [43] ("reducing the impact of tourist transport on climate change and pollution" and the "protection and valorisation of natural and cultural heritage", where "impacts of climate change on natural heritage, and the lack of adequate resources (human and financial) for the protection and preservation of heritage" are identified), and a quote related to climate change and green areas: "Climate change imposes an obligation to improve the microclimate conditions, by conservation of the existing and establishment of new green infrastructure (alleys and green areas) along all pedestrian paths and cycling routes, wherever possible in the existing urban setting. This problem is almost completely neglected in the past two decades of development" [43].]

Q-Methodology
The aim of the Q-methodology is to analyse subjectivity in a statistically interpretable form [27]. In this research, the Q-methodology surveys were used to extract stakeholders' individual perceptions [28] of climate change adaptation in the urban forests of Belgrade, and to differentiate which aspects of the adaptation processes are seen as the most important.
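Analytically, Q-methodology identifies shared perspectives by factor-analysing the correlations between respondents' whole sorts rather than between individual statements. The study performed this step with PQMethod 2.33; the sketch below is only a simplified Python analogue on randomly generated (hypothetical) sort data, and it omits the varimax rotation and flagging of defining sorts that PQMethod additionally performs.

```python
# Simplified, illustrative Python analogue of the Q-sort factor analysis
# performed with PQMethod in this study.  All sort values below are randomly
# generated placeholders, not the actual survey data.
import numpy as np

rng = np.random.default_rng(seed=0)

n_respondents, n_statements = 23, 48
# Each row is one respondent's Q-sort: ranks from -4 (strongly disagree)
# to +4 (strongly agree) assigned to the 48 statements.
sorts = rng.integers(-4, 5, size=(n_respondents, n_statements)).astype(float)

# Q-methodology correlates *people* rather than variables: how similar are
# two respondents' complete sorts?
person_corr = np.corrcoef(sorts)

# Extract shared perspectives as principal components of the person-by-person
# correlation matrix (PQMethod also offers centroid extraction and applies a
# varimax rotation, both omitted here for brevity).
eigvals, eigvecs = np.linalg.eigh(person_corr)
order = np.argsort(eigvals)[::-1]
n_factors = 3  # the study retained three shared perspectives
top_vals = np.clip(eigvals[order[:n_factors]], 0.0, None)
loadings = eigvecs[:, order[:n_factors]] * np.sqrt(top_vals)

# Respondents loading highly on the same factor share a perspective; a
# loading-weighted average of their sorts yields statement scores, which are
# standardised into the "normalized factor scores" reported per statement.
for f in range(n_factors):
    weights = loadings[:, f]
    statement_scores = weights @ sorts
    z = (statement_scores - statement_scores.mean()) / statement_scores.std()
    highest = np.argsort(z)[::-1][:3] + 1
    print(f"Perspective {f + 1}: highest-ranked statements -> {highest}")
```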
The Q-surveys were addressed to a variety of urban forestry stakeholders (urban forest managers, employees in ministries, research organisations, NGOs, etc.), including those targeted in the in-depth interviews. In total, 23 respondents from 14 organisations were interviewed (five at local, eight at national and two at regional level). Twenty of the Q-surveys were conducted face to face in June 2012, with an average length of 50 minutes. Three additional Q-surveys were completed through an on-line application of the Q-methodology, using Q-Assessor.

The application of the Q-methodology involved formulating statements about climate change adaptation in urban forests based on the in-depth interviews and a literature review. After the test phase, a concourse of 48 statements was created. The Q-surveys consisted of respondents sorting the 48 statements, based on their subjective point of view, along a scale from +4 (strongly agree) to -4 (strongly disagree) using the provided score sheets. The results of these 23 Q-sorts were then analysed using the PQMethod 2.33 factor analysis software (available at: http://schmolck.userweb.mwn.de/qmethod/). As a result, shared perspectives were identified and described. Each Q-survey was complemented by a brief follow-up interview, revealing why respondents agreed/disagreed the most with certain statements [29].

Research Area: Urban Forests of the City of Belgrade
Belgrade is the biggest city in the Republic of Serbia by area (3,222 km²) [30] and population (1.66 million) [31]. In the period 1948-2002, the total population of Belgrade increased 2.5-fold [30,32], which was followed by a significant enlargement of the city [18]. Urbanisation has had a major impact on green areas; many forests had to be cut down and very little has been done to prevent this situation [33,34].

Belgrade has in total 35,980.00 ha of forests in its administrative area. The Public Enterprise (PE) 'Serbia Forests' manages 32,322.70 ha of forests, while the Public Utility Company (PUC) 'Greenery Belgrade' manages 610.75 ha of forests and 2,900 ha of other green areas. These two management organisations are the most important at the city level. Other forests are managed by other organisations (water management companies, the military, agricultural organisations, churches) according to 10-year management plans approved by the Ministry [33]. The urban forests of Belgrade are mostly small in size, fragmented and scattered [24]. Deciduous tree species prevail (96.2%) [33]. A general assessment of the forests in Belgrade shows unfavourable forest conditions, and the main management goals identified are the conversion of coppice forests into high forests, timely and adequate maintenance of artificially established stands, increasing the share of autochthonous species, and responding to upcoming challenges (e.g. climate change) [24].

Background Information on Climate Change Policy in Serbia
The assessment of climate change for Serbia by a regional climate model shows that annual temperature is expected to rise from 0.8-1.1 °C (according to the A1B scenario) to 3.4-3.8 °C (A2 scenario) per decade [23]. Precipitation is projected to decrease by 1% each decade, which will be followed by a decrease in the number of days with snow cover [35]. In 2001, Serbia became a Party to the United Nations Framework Convention on Climate Change (UNFCCC), and in 2008 it ratified the Kyoto Protocol [23], thus focusing mainly on mitigation activities. So far, no climate change adaptation strategy has been developed at any level.
The Ministry of Agriculture and Environmental Protection is the national coordination body for the realisation of the UNFCCC convention. In collaboration with other ministries and governmental bodies (e.g. the Republic Hydrometeorological Service and the EU Integration Office), Serbia formed a working group for fulfilling the obligations ratified under the UNFCCC. The Initial National Communication (INC) to the UNFCCC represents one output of this working group and is the first state-of-the-art report in the field of climate change at the national level [23].

The development of the INC indicated several obstacles to the effective identification and implementation of climate change adaptation measures. The main problems identified were: (i) a lack of systematic data collection and databases, (ii) a deficient structure of the sector and (iii) a lack of financial and technological capacities. The main goal of the state is therefore to build new and strengthen existing capacities of the experts who are involved in (sectoral) policymaking in relation to climate change and in the development of the National Action Plan for Adaptation [23].

Climate Change Adaptation Aspects in Urban Forest Policy Documents
Urban forest management in Belgrade is influenced by various national and local policy documents. Content analysis of these policy documents demonstrated weak integration of climate change aspects. Climate change mitigation aspects are more prominent compared to adaptation (Table 1).

Of all the analysed documents, the Spatial Plan of the Republic of Serbia (2010) is the most advanced in terms of the integration of climate change issues. A specific chapter focuses on climate change effects in various sectors (e.g. forestry, nature protection) and identifies the main problems [36]. According to the latest Spatial Plan (2010), existing lower-level urban planning documents (e.g. the Regional Spatial Plan for the Administrative Territory of the City of Belgrade [37] and the Master Plan of Belgrade [38]) still require adjustments related to climate change.

The analysed forestry-related policy documents (e.g. the Law on Forests [39], the Forest Development Strategy [40] and the Afforestation Strategy of Belgrade [24]) have generally been harmonised with various international regulations, including climate change regulations. However, the content analysis revealed that climate change issues are weakly integrated and mainly appear as general and indirect statements throughout the documents. The Afforestation Strategy of Belgrade (2011) has been the most advanced, primarily by integrating climate change mitigation aspects [24], while the Forest Development Strategy (2006) only briefly introduces these aspects. Other documents (the National Sustainable Development Strategy [41], the Development Strategy of Belgrade [42] and the Tourism Development Strategy of Belgrade [43]) recognise climate change as a future challenge and call for the development of thoughtful approaches. The National Sustainable Development Strategy identifies the main problems in this regard (Table 1).
Climate Change Adaptation Aspects in Urban Forest Management Plans
The four analysed urban forest management plans (UFMPs) were developed for different forest areas (municipalities) and urban forest types (urban and peri-urban forests), managed by PE 'Serbia Forests' or PUC 'Greenery Belgrade'. In all four UFMPs, climate change mitigation and adaptation aspects were not directly covered and related terms were not used. Implications could only be found in the descriptions of the general aims of forest management, such as "forests have an important role in improving climatic conditions" or "forests have positive impacts on the environment". The parts of the UFMPs describing climate conditions in Belgrade are abundant with information on all climate parameters (e.g. annual average air temperature, minimal/maximal annual temperature/precipitation), but future impacts of climate change are not mentioned (Table 1).

Urban Forest Stakeholders' Perception towards Climate Change Adaptation
The results obtained from the in-depth interviews and the Q-methodology offer insights into the current state of urban forest management and policy regarding climate change. We therefore interlink the findings from both sources of information, as they complement and explain each other (detailed findings from each method are presented in Tables 2 and 3).

The application of the Q-methodology in this study revealed three shared perspectives regarding climate change adaptation in urban forests, which are named: 'sceptics', 'management-oriented perspective' and 'general-awareness perspective' [29]. 'Sceptics' do not perceive climate change as a challenge. They hold the opinion that climate variations are normal and that there is a lack of data and evidence on existing change at the local level. This perspective reveals a very low level of awareness and communication regarding climate change, both inside and between the various urban forestry organisations. Moreover, sceptics are of the opinion that urban forests will naturally adjust to future climate variability. They perceive other problems as more important (e.g. economic crises, governance issues, lack of information and technical assistance). However, this perception is not rigid: more scientific evidence and information regarding climate change impacts would be needed for this group to change its opinion (follow-up interviews).

The two other perspectives are aware of the challenges posed by climate change, and both rated statements on the importance of education, public awareness, and individual and collective action in tackling climate change as important. However, they also reveal different standpoints.
The 'management-oriented perspective' is aware of concrete needs related to the improvement of urban forest management in the light of climate change (e.g. the introduction of monitoring and modelling tools, obtaining more funds for research, improving legislation). The in-depth interviews revealed that the vulnerability of urban forests to climate change has been noticed in practice over the last ten years (Table 3), but has not been addressed in management plans. Vulnerabilities are seen through: (i) a lower physiological state of trees due to frequent droughts/water stress, (ii) more frequent weather accidents, (iii) changes in forest structure, and (iv) changes in forest increment. In social terms, vulnerability is seen through higher use of and changed demands toward forests, while in economic terms vulnerability is expected in the form of increased costs of maintenance and of introducing measures related to climate change. The concept of green infrastructure (e.g. forests, parks, green corridors) is identified as important with regard to climate change by some managers, who are trying to introduce this concept into city planning and thus secure higher visibility and importance of urban forests vis-à-vis other sectors (in-depth interviews).

The 'general-awareness perspective' values statements which highlight general challenges related to climate change as the most important, such as the need for more scientific evidence, better education, more funds for conducting research and improved cross-sectoral cooperation.

One of the main weaknesses, stressed in both the in-depth interviews and the Q-surveys, is the low level of communication and coordination between urban forestry actors. National-level organisations responsible for climate change issues and the agencies involved in urban forest management (mainly at the local level) do not cooperate. Managers stressed that climate change has not been set as an important issue at the management level, that possibly existing data and findings are not shared and used in management, and that communication around the issue is a matter of individual interest (Table 3).

According to the 'general-awareness perspective', climate change adaptation policy for urban forests should be top-down, mandated by the leading national bodies. However, the 'management-oriented perspective' perceives that management bodies, due to their practical knowledge and experience, should be involved in this process as well.
DISCUSSION
Climate change is a serious challenge for the future management of urban forests in Belgrade. Climatic changes have already been recognised over the last 50 years [22]. Due to climate change, the forest resources of Belgrade are facing problems similar to those of other forest resources in the temperate continental zone [17]. Local risks and negative influences of climate change have been recognised in the everyday practice of forest managers, but they are not analysed and tackled in future management plans. The forestry-related documents [39,40] provide a basis for further improvements and modifications towards the integration of climate change adaptation aspects in urban forest management. The Afforestation Strategy of Belgrade is one example where climate change becomes prominent [24]. The significance of urban planning documents for urban forest management has also been emphasised. However, frequent changes of government and legislation in Serbia prevent the adequate implementation of existing ordinances in urban forest management. Such an unstable system of passing legislation is directly connected to the limited reactions of lower-level governments and management. Harmonisation of the legally binding planning and management documents is necessary for an appropriate planning of activities.

Current urban forest policy and management in Belgrade is traditionally top-down, dominated by decisions made by the national body and characterised by low levels of communication among actors. Hence, there is a need for better communication between actors, as well as for the organisation of training sessions and outreach activities for forest professionals [45,46]. Furthermore, the coordination of activities and the interaction of various stakeholders at different levels and from different sectors are needed [7,47]. Stakeholders' awareness of potential risks needs to be raised to set up the conditions for well-informed and timely actions [45]. Bottom-up initiatives by local actors (e.g. managers) addressing specific local risks of climate change could be valuable. Moreover, interactive discussions on measures [46] and the involvement of various experts (e.g. climate experts, decision scientists, social and communications specialists) are important, as this might lead to better communication and agreement on selected measures and to the evaluation of trade-offs [20].
Even though many urban forestry stakeholders recognise the importance of climate change, their actual response can be characterised as low and passive. This indeed represents one of the major challenges for climate change adaptation. The presence of sceptics regarding this issue among employees in forestry shows that climate change awareness is still not as high as needed. Hence, there is an urgent need in Serbia to raise awareness among experts and to improve the capacities that are needed for adequate responses, as suggested in other European studies [45,46]. Empowering decision-makers and citizens is an important step, and can be done through formal education programmes but also through public service announcements [18].

CONCLUSION
This study gives a broad overview of the current situation related to climate change adaptation in urban forest management and policy, and it represents the first analysis of this topic in Serbia. It can serve as a basis for more detailed quantitative and qualitative analyses of specific urban forests and of the problems imposed by climate change, in both ecological and socio-economic terms, as a result of which more practice-oriented recommendations could be drafted.

At the moment, the integration of climate change adaptation measures in the urban forest management of Belgrade is a big challenge, dependent on the decisions of distinct actors who hold different perceptions. These differences of opinion indicate the existence of a complex urban forestry system, in which various needs should be harmonised in order to overcome existing and forthcoming challenges. Due to this complexity, adaptive forest management is seen as an adequate approach for urban forest management under climate change. Traditional urban forest management with a narrow sector-specific focus, dominated by the decisions of a few actors, cannot meet the increasing challenges that urban forests face nowadays. In practical terms, adapting urban forests to climate change should aim at reducing their vulnerability to undesirable effects while preserving a full range of ecosystem services. This mainly involves reducing the exposure of urban forests to risk and increasing their resilience to disturbances. Adapting the socio-economic aspects of the urban forestry system is thus necessary, which assumes involving various stakeholders and establishing coordination and interaction at all levels, as well as developing the necessary policies, management plans and programmes. Furthermore, urban forestry stakeholders' awareness and knowledge of the risks imposed by climate change are a necessary prerequisite for implementing adaptation measures. Strengthening research and communication and fostering discussion around climate change, as well as building a stronger network of urban forestry actors at both the local and national level, are therefore urgently required.
[Table 1. Aspects related to climate change from the analysed urban forest management and policy documents. Columns: name of document; year of passing; level; mitigation aspects; adaptation aspects; type of urban forest mentioned; quotes from the documents in relation to mitigation or adaptation to climate change. Legend: + directly addressed; +/- indirectly addressed; - missing. Among the quoted passages, the main problems identified are: "absence of a national inventory of GHGs emissions; the lack of strategic documents related to climate change; inconsistent legislation, relating to emissions, with the regulations of EU"; the main aims are therefore the harmonization of national regulations in the field of climate change and ozone depletion with EU regulations, and the adaptation of existing institutions in relation to the needs for active implementation of climate protection policy and the fulfilment of obligations under international agreements.]

[Table 2. Statements used in the Q-methodology, with normalized factor scores for each statement and perspective [29] (presented with two continuations). The statements cover, among other topics, the introduction of monitoring systems and modelling tools, the protection of biodiversity and forest habitats, planting a wide range of species, tree dieback in urban forests, climate scepticism and the sufficiency of evidence that climate change exists, invasive species, adaptation as part of the sustainable development of the city, the regulation of the city microclimate, the establishment of a dialogue about adaptation among various actors, the need for public institutional money, the place of local adaptations on national and international policy agendas, the urgency of changing forest management, top-down mandating of adaptation policy, waiting to act until the effects of climate change become evident, and the role of schools and universities in climate education.]

[Table 3. Summary of the main aspects revealed through the in-depth interviews (with continuation). Policy and legislation regarding climate change issues: climate change aspects are mainly indirectly covered; managers' knowledge of the existing national climate change regulatory framework is based only on individual interest; the lack of legislation is one of the biggest constraints for the integration of climate change adaptation measures in management. Communication about climate change: the level of communication is very low (inside and between the various organisations) and happens at the individual level; there is a need for information on financial sources and other opportunities that exist at the national level for tackling climate change; the internet and grey literature are mostly used as sources of information. Urban forest contribution to climate change in forest management plans: not specifically described in the UFMPs (it is seen as one of the main forest functions); terminology related to climate change is hardly used; long-term monitoring data regarding the contribution of urban forests to climate change are missing. Vulnerability of urban forests to climate change: noticed in the last ten years in Belgrade, but not analysed adequately; a database of the resulting changes does not exist; in ecological terms the vulnerability of forests is seen through the negative effects of drought periods on various tree species, water stress, a worse physiological state of trees and more frequent natural disasters; management and maintenance operations have been changed (more frequent irrigation and mowing are needed, planting is done in autumn); the structure of forests has changed (coniferous species have been replaced with deciduous, Fagus sylvatica L. has been heavily impacted, and invasive tree species have become more frequent in urban areas); a change in forest increment due to long dry periods has been noticed; all the changes in management and maintenance operations have been made as a consequence of negative influences already experienced (reactive adaptation measures); uncertainty exists due to the lack of data on local impacts; climate change is seen as a less important challenge than other challenges (e.g. land-use conflicts, governance issues, lack of information and technical assistance), but becomes more important because of perceived changes in forest resources, maintenance operations or management practices; mitigation measures (e.g. afforestation) have been better understood. Monitoring of climate change impact on urban forests: such monitoring has not been done; the necessity for comprehensive monitoring practices was expressed by all managers.]
2017-05-02T21:57:14.101Z
2015-10-15T00:00:00.000
{ "year": 2015, "sha1": "768f831e0e666d25c74309a0c9c37f50007668ba", "oa_license": "CCBY", "oa_url": "https://www.seefor.eu/images/arhiva/vol6_no2/zivojinovic/zivojinovic.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "768f831e0e666d25c74309a0c9c37f50007668ba", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
262300868
pes2o/s2orc
v3-fos-license
Mass social media-induced illness presenting with Tourette-like behavior

Currently, we are facing a new manifestation of functional neurological disorder presenting with functional Tourette-like behavior (FTB). This study aimed to show the characteristics of this phenotype presenting as an outbreak of "mass social media-induced illness" (MSMI) and to explore predisposing factors. Between 5-9/2021, we prospectively investigated 32 patients (mean/median age: 20.1/18 years, range: 11-53 years, n = 16 females) with MSMI-FTB using a neuro-psychiatric examination, a comprehensive semi-structured interview and aspects of the Operationalized Psychodynamic Diagnostic System. In contrast to tics, the numbers of complex movements and vocalizations were nine times greater than those of "simple" symptoms, and the number of vocalizations was one and a half times greater than that of movements. In line with our hypothesis of MSMI, symptoms largely overlapped with those presented by the German YouTuber Jan Zimmermann, justifying his role as the "virtual" index case in the current outbreak. Typically, symptoms started abruptly at a mean age of 19 years and deteriorated gradually, with no differences between males and females. In all patients, we identified timely-related psychological stressors, unconscious intrapsychic conflicts, and/or structural deficits. Nearly all patients (94%) suffered from further psychiatric symptoms including abnormalities in social behavior (81%), obsessive-compulsive behavior (OCB) (47%), Tourette syndrome (TS) (47%), anxiety (41%), and depression (31%); about half (47%) had experienced bullying, and 75% suffered from coexisting somatic diseases. Our data suggest that pre-existing abnormalities in social behavior and psychiatric symptoms (OCB, anxiety, and depression), but also TS, in combination with timely-related psychological stressors, unconscious intrapsychic conflicts, and structural deficits, predispose to contagion with MSMI-FTB.

Introduction
Only recently, our group suggested the concept of a new type of mass sociogenic illness (MSI) spread solely via social media (1). So far, it was believed that outbreaks of MSI in any case require direct face-to-face contact among the persons affected (2). In Germany, we have now identified an outbreak of functional Tourette-like behavior (FTB). Several authors have speculated that the current increase in FTB is related to the COVID-19 pandemic, increased social media use, and/or "tic-like" presentations on social media (1, 5-12). The latter earn enormous attention among young people, with millions of subscribers and followers of the most popular influencers such as Jan Zimmermann with his German YouTube channel "Gewitter im Kopf" ("thunderstorm in the brain") (15), the English-speaking Evie Meg under her TikTok name "thistrippyhippie" (16, 17), and the Danish-speaking Stine Sara (18). Interestingly, not only the patients affected, but also these influencers show a remarkable symptom overlap. As outlined recently (1), in Germany we suggested that the YouTuber Jan Zimmermann acts as a "virtual" index case of an outbreak of MSMI-FTB. In that regard, the following aspects are worth mentioning: in February 2019, the 23-year-old launched the channel "Gewitter im Kopf".
Claiming to inform about TS, his channel reached 1 million subscribers in less than 3 months and currently counts 2.21 million subscribers with 315,826,001 views of the 336 videos released so far (status: 20 December 2021) (1, 19). In his videos (20-27), he states that he experienced first mild tics before entering elementary school, but that by the age of 18 his symptoms changed and vocalizations occurred, such as whistling and pronunciation of the German words "Lavendel" ("lavender"), "Zuckerwatte" ("cotton candy"), and "Salami" ("salami"), followed by first-ever swear words and whole sentences, pronounced with a changed voice with low or high pitch. Stress, the presence of police, and people with TS would result in an increase of symptoms. However, the most intensive symptoms with the most aggressive content would be caused by the presence of his mother. On the contrary, symptoms would completely recede during sleep, intimacy, and sexual intercourse, and when speaking English. After having been diagnosed with TS at age 18, several treatments would have been initiated without improvement. Because of these newly developed symptoms, he stopped his training as a physical therapist and for his driver's license, and as a passenger started to sit in the back seat with the child safety lock on. At the same time, he stated that he and his mother found his symptoms "funny", making the family laugh, and his mother took notes of his "funniest phrases".

Here, we present for the first time detailed clinical data of the largest sample of patients with MSMI-FTB from a single German center described so far. We focussed on the presence of psychological stressors, since they, although removed from the diagnostic criteria in DSM-5, are still considered important risk factors for the development of FND, reflecting the "conversion" of underlying emotional distress into physical (neurologic) symptoms (28).

Methods
Between 5/2019 and 9/2021, 44 patients attending our Tourette outpatient clinic were diagnosed with FND presenting with Tourette-like symptoms after exposure to the aforementioned social media content. Of these, 32 patients agreed to participate in our study. Firstly, in all patients a thorough neuro-psychiatric examination was carried out by one of the authors (KMV), who is a psychiatrist, neurologist, and TS expert, making the diagnosis of FND and, in addition, confirming or excluding the diagnosis of a concurrent primary tic disorder such as provisional tic disorder or TS according to DSM-5 criteria. Secondly, using an explorative approach and a comprehensive semi-structured clinical and psychological interview similar to the one used by Paulus et al. (11), we collected demographic and biographical data, looked for sex differences, and took a detailed history of the newly developed symptoms with respect to age and kind of onset, course, triggering factors, suppressibility, distractibility, premonitory sensations, influencing factors, treatment, acceptance of the diagnosis of FTB, and coincidence with the pandemic. With this interview, composed by the authors and conducted by a psychologist and psychodynamic psychotherapist with extensive experience in TS (CF), we were also able to evaluate underlying psychological mechanisms, triggering and maintaining factors, and pre-existing psychopathology. Depending on the patients' age and developmental level, which referred to a patient's ability to report in detail, interviews were performed together with the parents.
To classify functional movements and vocalizations as "simple" or "complex", we followed the structure of the Yale Global Tic Severity Scale (YGTSS) (29). All current and recent movements, vocalizations, and abnormal behaviors reported by patients and parents, and observed during the evaluation, were documented. We evaluated the use of social media and in particular of the YouTube channel "Gewitter im Kopf" (if parents agreed, without their supervision, assuming that this leads to more truthful answers). Finally, we explored patients with regard to possible underlying emotional distress (e.g., traumatic/stressful life events and family dynamics). Based on the Operationalized Psychodynamic Diagnostic System (30), a widely used and well-established instrument in psychodynamic psychotherapy that aims to explore underlying psychopathology, we evaluated patients' intrapsychic conflicts, such as conflicting unconscious needs and desires (e.g., dependency and autonomy), and structural deficits, which refer to the overall maturity of mental functions and a possible lack of certain basic mental abilities (e.g., affect tolerance or regulation of self-esteem) that usually help in managing psychological stressors (31).

Statistical analyses
Data were mainly analyzed descriptively. Frequencies, means, medians, and measures of distribution were calculated using Excel (version Microsoft Office Professional Plus 2016). Analyses for group comparisons were performed in R (version 4.1.1). Within group comparisons, t-tests were performed for interval-scaled dependent variables and Fisher's exact tests for dichotomous dependent variables. P-values of significant results for group comparisons were conservatively corrected for multiple comparisons according to Bonferroni (p_c). In addition, Cohen's d and bias-corrected Cramer's V were calculated as measures of effect size for significant results. Detailed text obtained from the interviews with respect to psychological and maintaining factors was handled similarly to the process of inductive content analysis (32). Accordingly, patients' answers were summarized into categories and, if appropriate, into superordinate categories. Representative answers are provided.

In none of the patients had the correct diagnosis of MSMI-FND been made before presentation in our clinic. Instead, in all but two patients (n = 30, 93.8%), FND was misdiagnosed as tics/TS. Accordingly, 20 patients (62.5%) had received anti-tic medication (on average, 2.4 different drugs for a mean duration of 4.8 months; for details see Supplementary Table 1). However, 15 patients (46.9%) in addition fulfilled diagnostic criteria for TS. In 10 (31.3%) of these, the correct diagnosis of TS had been made before the onset of FND, while in four patients (12.5%) the additional diagnosis of TS was made during presentation in our clinic (missing data: n = 1, 3.1%). In all cases, however, FTB represented the predominant symptom. "Animal sounds" were identified in 11 patients (34.4%). Because of their high frequency and relative complexity, we classified them, differently from the YGTSS, as a separate category. The spectrum of "complex" FVS ranged from single words to 10-word sentences with neutral, socially inappropriate, and offensive content, mainly in German but partly also in English. The two by far most often used "complex" FVS were the German swear words "Arschloch" ("asshole") and "Fick dich" ("fuck you").
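To make the group-comparison statistics described in the Methods concrete: a minimal sketch of the same procedure, written here in Python rather than the R workflow actually used in the study and populated with hypothetical counts and values (not study data), could look as follows.

```python
# Minimal sketch of the group-comparison statistics described in the Methods.
# The study used R 4.1.1; this Python version with scipy is only illustrative,
# and every number below is hypothetical rather than taken from the study.
import numpy as np
from scipy import stats

# Dichotomous variable: Fisher's exact test plus bias-corrected Cramer's V.
# Hypothetical 2x2 table: rows = FTB-only vs. FTB+TS, columns = abrupt onset yes/no.
table = np.array([[15, 2],
                  [4, 11]])
_, p_fisher = stats.fisher_exact(table)

n = table.sum()
chi2 = stats.chi2_contingency(table, correction=False)[0]
phi2 = chi2 / n
r, k = table.shape
phi2_corr = max(0.0, phi2 - (k - 1) * (r - 1) / (n - 1))   # Bergsma bias correction
r_corr = r - (r - 1) ** 2 / (n - 1)
k_corr = k - (k - 1) ** 2 / (n - 1)
cramers_v = np.sqrt(phi2_corr / min(k_corr - 1, r_corr - 1))

# Interval-scaled variable: independent-samples t-test plus Cohen's d.
group_a = np.array([19.0, 21.0, 18.0, 23.0, 20.0])   # hypothetical values
group_b = np.array([17.0, 18.0, 16.0, 19.0, 18.0])
_, p_ttest = stats.ttest_ind(group_a, group_b)
pooled_sd = np.sqrt(((len(group_a) - 1) * group_a.var(ddof=1) +
                     (len(group_b) - 1) * group_b.var(ddof=1)) /
                    (len(group_a) + len(group_b) - 2))
cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd

# Conservative Bonferroni correction across all comparisons performed.
n_comparisons = 10                                    # hypothetical count
p_corrected = [min(1.0, p * n_comparisons) for p in (p_fisher, p_ttest)]
print(f"Fisher: p_c = {p_corrected[0]:.3f}, Cramer's V = {cramers_v:.2f}")
print(f"t-test: p_c = {p_corrected[1]:.3f}, Cohen's d = {cohens_d:.2f}")
```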
Since "complex" FVS showed a remarkable overlap, we clustered them depending on the topic as follows (in descending order): "fick/fuck", other swear words, radical right-wing statements, food-related words, and statements expressing disagreement (for details see Table 2).

Premonitory sensations and suppressibility
Twenty-seven patients (84.4%) reported experiencing a premonitory sensation prior to the occurrence of FTB, in most patients located at the body part where the FTB occurred (n = 14, 43.8%). Premonitory sensations were reported to last on average 66.5 s (SD: 166 s, range: 1 s-1 h, median: 3 s). Fifteen patients (46.9%) gave descriptions such as having "a stone collar around one's neck", a "dam inside the stomach", or "trembling inside the head". Twenty-seven patients (84.4%) reported being able to voluntarily suppress FTB, on average for 71 min (SD: 128 min, range: 1 s-8 h, median: 15 min). For details see Table 1.

Injuries due to functional symptoms
Fifteen patients (46.9%) reported accidental self-injuries, but in no case was medical treatment necessary. Nevertheless, 17 patients (53.1%) undertook precautionary measures to prevent possible injuries; nine (28.1%) avoided potentially dangerous objects such as knives; six (18.8%) used protective tools like cushions; three (9.4%) instructed family/friends to remove specific objects; two (6.3%) used plastic dishes; and four (12.5%) indicated others.

Psychological stressors, unconscious intrapsychic conflicts, and structural deficits
In all patients, unconscious intrapsychic conflicts (n = 11, 34.4%), structural deficits (n = 12, 65.6%) or both (n = 9, 28.1%) were found. Fourteen patients (43.8%) exhibited relevant autonomy-dependency conflicts. We found a clear relationship between the presence of intrapsychic conflicts and structural deficits, respectively, and comorbid psychiatric symptoms, with the highest number of psychiatric symptoms in patients with both intrapsychic conflicts and structural deficits (mean: 5), followed by those with structural deficits only (mean: 4.3) and those with intrapsychic conflicts only (mean: 2.5). For details see Supplementary Table 2. In 22 patients (68.8%), timely-related psychological stressors were identified: substantial conflicts in the family, in a partnership, or at school/work (n = 9, 28.1%); considerable life changes such as parents' separation, change of class/school, moving out of or back into the parental home, the moving out of close siblings, or occupational disability (n = 8, 25.0%); and confrontation with exceptional life events such as impending/recent surgery and the sudden hard lockdown due to the pandemic (n = 5, 15.6%). In six of these patients, in addition, considerable structural deficits and comorbidities (personality disorders, PTSD, and pre-existing FTB) were found. In the remaining 16 patients, timely-related psychological stressors reactivated unconscious intrapsychic conflicts. Finally, in all of those patients (n = 19, 59.4%) still closely integrated in the family structure, dysfunctional dynamics between family members became apparent.
Maintaining factors
In all but one patient (96.9%), maintaining factors could be identified (multiple answers possible), such as the granting of special privileges at home/school/work (n = 19, 61.3%; e.g., permission to leave the classroom whenever wanted, to write exams in separate rooms without supervision, relief from everyday household duties or from more difficult duties at work); receipt of special attention from parents/friends/partners with more loving and caring interactions, increased social recognition at school, or increased attention on social media (n = 18, 58.1%); self-experienced improvement of social interactions due to MSMI-FTB (n = 4, 12.9%); and feeling sensations of relief after performing FTB (n = 4, 12.9%; e.g., "I feel better/more relaxed afterwards"; three of these patients had severe structural deficits as described above). Eleven patients (34.4%) reported displaying FTB on social media as tics/TS; another three patients (9.4%) each stated that they would like to do so, but that their parents would not allow it or that they would not be brave enough to do so.

Social media use and overlap with "Gewitter im Kopf"
All patients confirmed having watched "Gewitter im Kopf" and 30 (93.8%) stated using YouTube regularly. In all patients, the onset of FTB was after the YouTube channel was launched, and in 29 patients (90.6%) it was after having watched "Gewitter im Kopf" (missing data: n = 3, 9.4%). Based on a detailed analysis comparing patients' symptoms to those of the channel host, there was an obvious overlap with respect to "complex" movements and vocalizations and inappropriate behaviors (n = 32, 100%, each), a change in voice pitch when pronouncing FVS (n = 17, 53.1%), and giving the FTB an old-fashioned German name (e.g., Günter, Uwe, Hildegard or Helga), similar to the channel host, who named his disease "Gisela" (n = 17, 53.1%). For details see Table 2.

Follow-up after diagnosis of social media-induced FTB
At follow-up (mean: 4.8 months, range: 6 days-19 months), 17 patients (53.1%) reported an improvement of FTB of on average 74.3% (range: 15-99%, median: 80%). Of these, in 4 (12.5%) symptoms improved without treatment, while 13 patients (40.6%) had received treatment, which in 9 (28.1%) of these consisted of not further specified psychotherapy and in 4 (12.5%) of other not further specified treatments. In four patients (12.5%), symptoms markedly improved immediately after the diagnosis had been made. In the majority, symptoms improved more slowly.

Comparisons of subgroups
Males versus females: No significant sex differences were found (for details see Table 1).
Patients with MSMI-FTB with and without TS: When comparing patients with comorbid TS (n = 15) and those without TS (n = 17), in the FTB-only group we found (i) a significantly higher rate of abrupt onset (with a large effect) (p_c < 0.001, V = 0.88), (ii) a significantly higher rate of constant increase of symptoms (with a large effect) (p_c < 0.001, V = 0.76), and (iii) a significantly lower rate of OCB (with a large effect) (p_c = 0.025, V = 0.62) (for details see Table 4).

Discussion
In this study, we present for the first time an in-depth characterization of a large cohort of patients with MSMI-FTB.
Our main findings are: (i) we found a large symptom overlap within our sample, but also with the symptoms presented by the host of the German YouTube channel "Gewitter im Kopf", justifying our recently suggested concept of a new type of MSI that, in contrast to recent outbreaks, spreads solely via social media and is induced by a "virtual" index case (1); (ii) in all patients, timely-related psychological stressors, unconscious intrapsychic conflicts, and/or structural deficits could be identified, predisposing for contagion with MSMI-FTB; (iii) in nearly half of our patients, pre-existing TS was diagnosed, suggesting tics as another independent predisposing factor for MSMI-FTB; (iv) in most patients, abnormalities in social behavior and several further psychiatric symptoms were diagnosed; (v) in contrast to other recent reports (6-8, 10), half of our patients were male; however, there were no clinical differences between males and females; and (vi) preliminary follow-up data show that the course of MSMI-FTB spans widely, from spontaneous complete remission within days to no improvement after months.

Only recently, an increasing, but still small, number of reports has been published describing mainly young people with FTB. When comparing the main characteristics of our sample with previous reports (5-7, 11), a number of similarities, but also relevant differences, could be identified. Similarities included (i) mainly rapid onset of symptoms (6-8, 10); (ii) mainly rapidly progressive course (6, 7, 11, 12); (iii) positive history of exposure to social media content related to FTB (5-7, 9, 11); (iv) triggering factors provoking certain symptoms (12); (v) history of coexisting medically unexplained symptoms including other FTB (9); (vi) a very similar pattern of movements and vocalizations across different centers, countries, and continents, with predominantly "complex" movements involving the trunk, upper extremities, and head as well as "complex" vocalizations with swear words and insults (6, 7, 9, 11, 12). This observation is in line with our hypothesis that the spread of MSMI induces, within a short time period, "secondary virtual" index cases, resulting in a global, and no longer locally restricted, outbreak (1); and (vii) misdiagnosis as tic disorder/TS before patients are seen in specialized TS outpatient clinics (11).

Different from the available reports, we describe for the first time a small number of older patients (n = 4, three females, aged 30, 49, 51, and 53 years) affected with MSMI-FTB. However, in line with other reports (5, 7, 9, 11), the majority of our patients were adolescents or young adults (range: 11-23, median age: 18 years). In all patients over age 30, obvious structural deficits were found, and in three of them, in addition, clinically relevant comorbidities such as abnormalities in social behavior (n = 3), TS, personality disorders, and PTSD (n = 2, each). This higher burden of psychopathologies may explain the worse prognosis in adults compared to adolescents reported recently by Howlett et al. (33). In contrast to recent studies from Canada, the United Kingdom, and the United States reporting a female preponderance of about 9:1 (5-7, 9), in our sample males and females were equally distributed. Of note, in another German sample (n = 13), an even higher male to female ratio (1.6:1) was found (11).
In line with our concept of MSMI (1), we believe that the different sex ratios are related to the fact that in Germany, with Jan Zimmermann (15), a male person acts as the "virtual" index case, while in English-speaking countries the most influential person ("thistrippyhippie") (17) is female. Based on research by Bartholomew et al. (2), it is well known that spread in MSI outbreaks is triggered by emotional arousal and identification. Since emotional contagion also spreads via social media (34, 35), the channel hosts' sex may have an impact on sex ratios. Accordingly, most of our (male) patients reported finding the channel host likable, at least in the beginning.

Our detailed clinical characterization enables us to add the following aspects to the existing literature: (i) the numbers of complex movements and vocalizations were nine times greater than those of "simple" symptoms, and the number of vocalizations was one and a half times greater than that of movements; (ii) premonitory sensations and suppressibility are often described, but largely differ from typical reports in patients with tics. When comparing our results to reports in TS, premonitory urges in TS are described as a feeling of pressure or tension located in the same body area as the corresponding tic and lasting only milliseconds, and thus they are less complex, shorter, with fewer variants, and more circumscribed (36, 37). In TS, patients report being able to suppress their tics on average for a few minutes, and thus for much shorter and with much less contingency (36, 37); (iii) aggravating, triggering, and improving factors are characteristic features of MSMI-FTB. Although patients with TS also report environmental factors, with stress, anxiety and fatigue being the most common factors that may transiently exacerbate tics (38), the descriptions clearly differ from those of patients with MSMI-FTB. For example, in MSMI-FTB, on average a much larger number of influencing factors is reported; symptoms often increase in the presence of other people, while tics in TS usually decrease when other people are around (39); and most patients report very specific or even rather peculiar triggers such as the presence of certain people, e.g., teachers who are disliked, or calls for unloved obligations. Furthermore, a complete remission of symptoms for hours or even days or weeks, as often described by patients with MSMI-FTB, is rather unusual in patients with TS. However, similar to patients with TS, patients with MSMI-FTB most often report an improvement of symptoms while relaxing and concentrating (39); (iv) distractibility is a well-known phenomenon in FND that was also often seen in our sample of MSMI-FTB, while it is only partially present in patients with TS (40); (v) maintaining factors were found in nearly all patients and seem to play an important role in symptom persistence; and (vi) minor injuries due to functional symptoms are common, often inducing disproportionate precautions.

With respect to obscene/socially inappropriate functional symptoms, it is well known that coprophenomena are complex tics that occur in only a small percentage of patients with TS (41). In addition, coprolalia in TS is usually characterized by only a small number of single words (and not countless and very complex utterances), which are in addition often masked by the pronunciation of only the first letters (41, 42). The relation to context might be another factor that may help to differentiate coprophenomena in TS from obscene/socially inappropriate functional symptoms in FTB.
However, a clear distinction of obscene/socially inappropriate functional symptoms from non-obscene socially inappropriate behavior (NOSI) can be difficult and raises the question of how to interpret NOSI in TS in general (43). Currently, there is increasing evidence that in TS a functional overlay is much more common than previously thought (unpublished data).

Since the first cases of MSMI-FTB occurred about 1 year (first onset in the first patient: 2/2019; first presentation of the first patient in our clinic: 5/2019) before the pandemic (first COVID-19 case in Germany: 1/2020), it can be excluded that the pandemic is the primary cause of this outbreak. However, it may have fueled the spread due to increased social media use, increased anxiety, and a loss of compensatory factors such as reduced personal contacts, hobbies, and daily activities (1), since during the pandemic a general increase in FND has been observed (12). In line with recent reports (6, 7), we found anxiety and depressive symptoms in a substantial number of patients. However, apart from that, we detected a broad spectrum of somatic as well as further psychiatric diagnoses and symptoms including OCB, sleeping problems, and ASD in all but two patients, comorbid TS in nearly half of the patients, abnormalities in social behavior in more than four-fifths, structural deficits in two-thirds, experience of bullying in nearly half of the patients, and timely-related psychological stressors and underlying unconscious intrapsychic conflicts in about two-thirds of the patients. Thus, it can be hypothesized that various factors predispose for contagion with MSMI-FTB and that this may be enhanced by dysfunctional family relations. From a psychodynamic perspective, it can be assumed that timely-related psychological stressors reactivated unconscious inner psychological conflicts and thus enabled the destabilization of an already fragile psychological balance in individuals with pre-existing structural deficits (30). With respect to treatment, we recommend also taking these underlying psychopathological patterns as well as dysfunctional family relations into consideration. Since only one-third of our patients and 40% of parents felt relieved after being informed about the diagnosis of MSMI-FTB, the more general treatment recommendations for FND may be useful (44), including an empathic and reassuring basic attitude.

Of fundamental importance is the question of why the first ever documented outbreak of MSMI presented with FTB. Following the LeRoy outbreak in 2011 (45), this is the second outbreak described presenting with FTB. The specific symptomatology of both FND in general (46) and the "motor variant" of MSI (47-49) is known to be culture-bound and closely related to social demands. The majority of patients affected with MSMI-FTB are adolescents and young adults. This suggests that the period of identity formation, characterized by questioning oneself, seeking new identities, and trying out new roles (50), plays a part. Since a substantial number of patients exhibit autonomy-dependence conflicts, it can be speculated that social media influencers serve as alternative role templates, resulting in functional symptoms primarily characterized by taboo words, insults, offensive comments, socially inappropriate behaviors, and transgressive attitudes. Remarkably, and in line with recent reports (10), a small number of our patients were in parallel engaged with issues of their own gender identity.
The following limitations have to be addressed: (i) a selection bias cannot be excluded, since it can be assumed that more severely affected patients presented in our center; (ii) some answers were conflicting and unreliable and might be biased due to parental supervision during the interview, unawareness of subconscious conflicts within the family, or complex psychodynamic situations; (iii) in some of the adult patients, only self-reports were used, which might have caused under- or overestimation of symptoms; (iv) in two patients, the psychological interviews were performed with the parents only and not with the patients, and one of these interviews was done only via phone; (v) although this is the largest sample of MSMI-FTB, the sample size is still relatively small; (vi) although data were collected prospectively using a semi-structured interview, the accuracy of statements regarding the psychopathological background, including intrapsychic conflicts and structural integration, could be further increased by using a standardized questionnaire; and (vii) another subject of future research should be the investigation of the role of education and occupational status in the development of FTB.

Data availability statement
The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.

Ethics statement
The study was reviewed and approved by the Local Ethics Committee at Hannover Medical School (No. 8995_BO_S_2020). Written informed consent to participate in this study was provided by the participants and their legal guardian/next of kin.

Author contributions
CF contributed to the conception and design of the study, organization of the database, collection, analysis and interpretation of data, and wrote the first draft of the manuscript. NS contributed to data analysis and interpretation. AP contributed to the conception of the study, organization of the database, and data analysis. MH contributed to the statistical analysis of the study. LL contributed to the organization of the database and the collection of the data. CW contributed to the conception and design of the study. KM-V contributed to the conception and design of the study and the collection, analysis and interpretation of data. All authors contributed to manuscript revision, read, and approved the submitted version.
2022-09-20T14:10:38.876Z
2022-09-20T00:00:00.000
{ "year": 2022, "sha1": "589e212c71468ceab658c8c69c1a9079467dd2fd", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyt.2022.963769/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "589e212c71468ceab658c8c69c1a9079467dd2fd", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
249994053
pes2o/s2orc
v3-fos-license
SARS-CoV-2 escapes direct NK cell killing through Nsp1-mediated downregulation of ligands for NKG2D

Summary
Natural killer (NK) cells are cytotoxic effector cells that target and lyse virally infected cells; many viruses therefore encode mechanisms to escape such NK cell killing. Here, we interrogate the ability of SARS-CoV-2 to modulate NK cell recognition and lysis of infected cells. We find that NK cells exhibit poor cytotoxic responses against SARS-CoV-2-infected targets, preferentially killing uninfected bystander cells. We demonstrate that this escape is driven by downregulation of ligands for the activating receptor NKG2D (NKG2D-L). Indeed, early in viral infection, prior to NKG2D-L downregulation, NK cells are able to target and kill infected cells; however, this ability is lost as viral proteins are expressed. Finally, we find that SARS-CoV-2 non-structural protein 1 (Nsp1) mediates downregulation of NKG2D-L and that Nsp1 alone is sufficient to confer resistance to NK cell killing. Collectively, our work demonstrates that SARS-CoV-2 evades direct NK cell cytotoxicity and describes a mechanism by which this occurs.

INTRODUCTION
Natural killer (NK) cells are innate lymphocytes that play a critical role in the immune response to viral infection [1-4]. Since the advent of the COVID-19 pandemic, studies examining the immune response in COVID-19 have noted that NK cells are less abundant in the peripheral blood of severe COVID-19 patients than in healthy donors [5-13]; a concurrent increase in NK cell frequency in the lungs of critically ill patients suggests that peripheral depletion of NK cells may be due to trafficking to the site of infection [14]. In addition, immune profiling has uncovered significant, severity-associated phenotypic and transcriptional changes in the peripheral NK cells that remain in the blood of COVID-19 patients. In severe COVID-19, peripheral blood NK cells become activated and exhausted [6,7,9,11,13,15-17]. They also downregulate surface-level expression of the activating receptors NKG2D and DNAM-1, possibly as a consequence of internalization after ligation [7,10], and exhibit defects in their ability to respond to tumor target cells and cytokine stimulation compared with NK cells from healthy donors [11,13,15]. Less is known about how NK cells respond directly to SARS-CoV-2-infected cells, although several studies have demonstrated that NK cells can suppress SARS-CoV-2 replication in vitro [16,18,19]. Moreover, a recent study found that NK cells are able to mount robust antibody-mediated responses against SARS-CoV-2-infected target cells [20]. However, the mechanisms underlying NK cell responses to SARS-CoV-2-infected cells are not understood. This is particularly important because many viruses employ mechanisms that allow them to evade recognition and killing by NK cells. For example, both HIV-1 and human cytomegalovirus downregulate the ligands for NK cell activating receptors, shielding infected cells from recognition by NK cells [21-32]. In this study, we utilized primary NK cells from healthy donors in conjunction with replication-competent SARS-CoV-2 to create an in vitro model system that dissects the NK cell response to SARS-CoV-2-infected cells. We focused on assessing the direct killing of infected target cells to better understand how the balance between SARS-CoV-2 recognition and escape contributes to disease.
Our results demonstrate that SARS-CoV-2-infected cells efficiently escape killing by healthy NK cells, likely due to downregulation of ligands for the activating receptor NKG2D. Furthermore, we interrogated the mechanisms underlying this phenomenon and identified a specific SARS-CoV-2 protein, non-structural protein 1 (Nsp1), that mediates escape from NK cell recognition. Collectively, our work deeply interrogates the NK cell response to SARS-CoV-2 and provides insight into the role of NK cells in COVID-19. SARS-CoV-2-infected cells evade NK cell killing through a cell-intrinsic mechanism We established a system to explore the NK cell response to SARS-CoV-2 infection using A549-ACE2 cells, 33 which are lysed by NK cells and are infectible with SARS-CoV-2. We infected A549-ACE2 cells with SARS-CoV-2/WA1-mNeonGreen 34 (which replaces ORF7a with mNeonGreen) at a multiplicity of infection (MOI) of 0.5 (Figure 1A). After 24 h, approximately 6% of cells fluoresced green, increasing to 50% by 48 h (Figure 1A). This suggests that, although SARS-CoV-2 only requires ~8 h to complete its life cycle, 35,36 48 h is required for detection of robust viral protein expression in a low MOI system in which viral replication results in spreading infection. [Figure 1 legend, panels D-G: (D and E) Background-subtracted percentage of A549-ACE2 cell death as measured by eFluor 780 viability dye staining in either infected versus exposed, uninfected cells (D) or mock-infected versus exposed, uninfected cells (E). Background cell death for each experiment and condition was calculated as the average level of death in four wells of the condition of interest. Data are shown from n = 18 unique healthy donors across 4 separate experiments; lines connect points from individual donors. (F and G) Representative flow plots (F) and quantitations (G) of the percentage of NK cells expressing CD107a and IFN-γ upon culture with no targets, mock-infected targets, or SARS-CoV-2-infected targets; lines connect points from individual donors (n = 6). Significance values for all plots in this figure were determined using a paired Wilcoxon signed-rank test with the Bonferroni correction for multiple hypothesis testing.] To understand how exposure to SARS-CoV-2-infected target cells impacts NK cell phenotype and function, we added NK cells from healthy donors that had been preactivated overnight with IL-2 to target cells that had been infected for 48 h (Figure 1B). This is an important distinction from previous studies that added NK cells early after SARS-CoV-2 infection, before the virus-infected cell expresses the full complement of viral proteins. 16,18,19 We then assessed the ability of NK cells to directly lyse SARS-CoV-2-infected (mNeonGreen+) target cells compared with bystander (exposed but mNeonGreen−) and mock-infected cells (Figures 1C-1E). NK cell co-culture induced significantly more death of uninfected "bystander" cells than of SARS-CoV-2-infected cells in all 18 NK cell donors tested (Figure 1C). We found no significant difference in the killing of bystander cells compared with mock-infected cells that were never exposed to SARS-CoV-2, indicating that the ability of SARS-CoV-2-infected cells to survive is a cell-intrinsic effect (Figure 1D).
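For readers who want to see how the per-donor readout described above could be computed, the following Python sketch illustrates background-subtracted killing and a paired Wilcoxon signed-rank test with Bonferroni correction, the statistics named in the Figure 1 legend. It is not the authors' analysis code; the example values, the interpretation of the background wells, and the number of corrected comparisons are assumptions made purely for illustration.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-donor percentages of dead (eFluor 780+) target cells.
# The study used n = 18 donors; five made-up donors are shown here.
infected   = np.array([12.0,  9.5, 15.2, 11.1, 13.4])   # mNeonGreen+ targets with NK cells
bystander  = np.array([34.5, 28.9, 40.2, 31.7, 36.8])   # exposed, mNeonGreen- targets with NK cells
background = np.array([ 5.0,  4.2,  6.1,  4.8,  5.5])   # mean death across four control wells (assumed: targets without NK cells)

# Background-subtracted killing, following the description in the figure legend.
killing_infected  = infected - background
killing_bystander = bystander - background

# Paired, non-parametric comparison across donors.
stat, p = wilcoxon(killing_infected, killing_bystander)

# Bonferroni correction; the number of paired comparisons per figure is assumed here.
n_comparisons = 3
p_adjusted = min(1.0, p * n_comparisons)
print(f"Wilcoxon statistic = {stat:.1f}, Bonferroni-adjusted p = {p_adjusted:.4f}")
```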
To ensure that these differences were not a result of rapid cell death resulting in cell loss and undercounting of killed SARS-CoV-2-infected cells, we assessed the ratio of infected (mNeonGreen+) target cells to uninfected (mNeonGreen−) target cells in cultures without NK cells compared with cultures with NK cells, gating only on "live" versus "total" cells. There was no difference in this ratio among all single cells (not live gated) in the presence and absence of NK cells, suggesting that if cells are disappearing from culture due to apoptosis, they are disappearing at an equal rate among infected and bystander cells (Figure S1). Meanwhile, the ratio of mNeonGreen+ cells to mNeonGreen− cells was increased in live-gated cells upon addition of NK cells due to preferential killing of uninfected target cells by NK cells (Figure S1). SARS-CoV-2-infected cells do not actively inhibit NK cell functionality We next interrogated changes in NK cell phenotype and function induced by co-culture with mock- or SARS-CoV-2-infected target cells. Importantly, we continued utilizing an MOI of 0.5, resulting in around 50% infection of the SARS-CoV-2-infected wells. We observed significant induction of CD107a, a marker of NK cell degranulation and surrogate for cytolytic activity, and IFN-γ upon culture with either SARS-CoV-2-infected or mock-infected A549-ACE2 cells (Figures 1F and 1G). Activation occurred primarily within the CD56bright CD16low subset, possibly due to IL-2 priming (Figures 1F and S2). We also found no significant differences in the expression of other phenotypic and functional markers on NK cells co-cultured with SARS-CoV-2-infected targets compared with those cultured with mock-infected cells (Figure S2). This suggests that, while healthy NK cells are unable to lyse SARS-CoV-2-infected cells, the presence of SARS-CoV-2-infected cells does not inhibit the NK cell response to bystander cells. Collectively, these results support a model in which a factor intrinsic to SARS-CoV-2-infected cells allows escape of NK cell killing. SARS-CoV-2 infection modulates expression of ligands involved in NK cell recognition We next investigated the mechanism by which SARS-CoV-2-infected cells were able to evade lysis by NK cells. We used flow cytometry to profile the expression of the ligands for various NK cell activating and inhibitory receptors. 3 We grouped antibodies for ligands recognized by the same receptor into a single channel to quantify total ligand density for a given receptor. While expression of CD112/CD155 (ligands for DNAM-1), CD54 (ligand for LFA-1), and HLA-A/B/C were decreased in infected cells compared with mock and bystander cells, the magnitude of these reductions was relatively small. In contrast, the ligands for NKG2D (MICA, MICB, and ULBPs 1, 2, 5, and 6; collectively referred to as NKG2D-L) were downregulated to a much greater extent in SARS-CoV-2-infected cells compared with uninfected cells and bystander cells (Figures 2A, 2B, and S4A). All of the individual ligands comprising NKG2D-L were strongly downregulated in SARS-CoV-2-infected cells compared with mock-infected controls (Figures 2C and S4B). Notably, the downregulation of NKG2D-L and the downregulation of HLA-A/B/C (MHC class I) would be expected to have opposing effects on the NK cell response to infected cells: downregulation of MHC class I would enhance NK cell recognition of infected targets, while NKG2D-L downregulation could represent a mechanism of NK cell evasion.
As we observed a decrease in the ability of NK cells to kill SARS-CoV-2-infected cells and other studies have already interrogated MHC class I downregulation by SARS-CoV-2, 37-39 we focused our attention on the loss of NKG2D-L as a potential evasion mechanism. Downregulation of NKG2D-L is correlated with inhibition of NK cell killing of SARS-CoV-2-infected cells To evaluate the association between NKG2D-L expression and killing of SARS-CoV-2-infected cells, we assessed NKG2D-L expression on the cells that survived following co-culture with NK cells. We identified a significant decrease in the frequency of NKG2D-L-expressing target cells in wells containing NK cells at both time points and across all infection conditions, suggesting that NK cells preferentially kill NKG2D-L-expressing targets in both SARS-CoV-2-infected and mock-infected wells (Figure 2D). We also assessed the kinetics of NKG2D-L expression on infected (mNeonGreen+) A549-ACE2 and found that, while NKG2D-L were downregulated to some extent at 24 h postinfection compared with uninfected cells, it was not until 48 h post-infection that we observed almost total loss of these proteins at the surface level ( Figure 2E). We therefore hypothesized that NK cells would kill infected cells more robustly at 24 h post-infection compared with 48 h. Indeed, we observed significantly better killing of mNeonGreen+ target cells at 24 h post-infection compared with 48 h ( Figure 2F). Further supporting a model in which downregulation of NKG2D-L allows for evasion of NK cell killing, we identified a strong correlation between the expression of NKG2D-L in target cells and target cell lysis across all time points and infection conditions ( Figure 2G). NK cells are able to efficiently kill SARS-CoV-2-infected cells immediately following infection Other groups have reported that NK cells are able to successfully suppress viral replication in a system where the NK cells are added to a target cell culture soon after infection with SARS-CoV-2. 16 Figure 3A), we compared total killing of all target cells in SARS-CoV-2-infected wells at 0 and 48 h. We found that, as expected, NK cells were able to robustly kill cells that were freshly infected (0 h) but not those that had been infected for 48 h ( Figure 3B). Moreover, NK cells were slightly better at killing infected cells compared with mock-infected cells at the 0 h time point ( Figure 3C), providing additional evidence that NK cells can successfully target infected cells in the early stages of SARS-CoV-2 infection, as previously reported. 16,18,19 Finally, we conducted a similar analysis of total cell killing at 24 versus 48 h post-infection. In accordance with our other findings, we observed that NK cells can efficiently kill virus-exposed cells through 24 h post-infection, but not at 48 h ( Figure 3D). Thus, our data and other published works collectively suggest that NK cells are capable of suppressing viral replication, but their ability to do so is significantly hampered if the cell has been infected for at least 48 h. SARS-CoV-2 protein Nsp1 downregulates ligands for NKG2D Having identified changes in the protein-level expression of NKG2D-L in SARS-CoV-2-infected cells that may underlie escape from NK cell killing, we next sought to understand how the virus mediates this effect. SARS-CoV-2 encodes 29 individual proteins that are broadly classified into 3 categories: structural, non-structural, and accessory. 
While the roles of these proteins are still being investigated, many of the non-structural and accessory proteins are known to suppress antiviral innate immune responses. [40][41][42][43] We therefore transfected each individual SARS-CoV-2 protein, tagged with two Strep Tag domains (Strep Tag II) to allow for easy detection, into A549-ACE2 cells and assessed for their effect on NK cell receptor ligand expression by flow cytometry ( Figures 4A and 4B). We successfully transfected 25 of the 29 SARS-CoV-2 proteins into A549-ACE2s; we also transfected cells with GFP as a non-viral control ( Figures S5A-S5C). While several proteins downregulated NKG2D-L, SARS-CoV-2 non-structural protein 1 (Nsp1) had by far the strongest effect ( Figures 4C and S5D). Several other viral proteins, primarily accessory proteins, also downregulated NKG2D-L expression, and some increased expression. However, as Nsp1 had the most impact on NKG2D-L expression, we chose to move forward with interrogation of this protein. Like replication-competent SARS-CoV-2, Nsp1 also downregulated MICA, ULBP-1, and ULBPs-2, 5, and 6. However, it had no effect on MICB ( Figure 4D). To ensure that the downregulation of NKG2D-L that we observed was not an artifact of the cell line we were using, we also transfected Nsp1 into 293T cells and K562 cells. Nsp1 downregulated NKG2D-L expression in both cell lines, which express NKG2D-L at baseline ( Figure S6A). Nsp1 also mediated downregulation of MHC class I, but not CD54 or the ligands for DNAM-1, in A549-ACE2s ( Figures 4F and S6B-S6D). SARS-CoV-2 post-transcriptionally downregulates NKG2D-L and does not induce shedding, intracellular retention, or degradation Nsp1, also known as the SARS-CoV-2 leader protein, is the first protein translated when the virus enters a cell and serves as a global inhibitor of host translation. Nsp1 is highly conserved across coronaviruses as it plays an important role in enhancing pathogenicity by inhibiting the innate immune response. [44][45][46][47][48][49] Schubert et al. demonstrated that SARS-CoV-2 Nsp1 functions by sterically inhibiting entry of mRNA into the mRNA channel of the 40S ribosomal subunit. 49 Thus, it is likely that Nsp1 mediates a translational block to reduce surface NKG2D-L expression. To orthogonally validate that NKG2D-L expression is reduced via translational blockade in SARS-CoV-2-infected cells, we assessed several other potential methods of downregulation. Consistent with a model of translational inhibition, we observed only a small decrease in transcripts encoding MICB, ULBP-1, and ULBP-2 in infected cells compared with mock-infected cells and no decrease in MICA transcript levels ( Figure 5A). This modest difference likely reflects the overall decrease in transcript levels in cells infected with SARS-CoV-2 and is consistent with the idea that NKG2D-L expression is reduced at the post-transcriptional level. We also assessed whether SARS-CoV-2 might induce degradation of NKG2D-L, as CMV has also been shown to downregulate NKG2D-L through targeting these proteins for proteasomal or lysosomal degradation. 31,32 We therefore treated mock or SARS-CoV-2-infected cells with a proteasomal inhibitor (MG-132) or a lysosomal inhibitor (BAF-A1) and then assessed NKG2D-L expression; we found that neither inhibitor rescued NKG2D-L expression in infected cells ( Figures 5B and S8A). 
Finally, we addressed the possibility of SARS-CoV-2-infected cells shedding NKG2D-L from the cell surface, which has been reported for other viruses and in the setting of cancer, 24,50,51 by assessing NKG2D-L levels in the supernatants of mock- and SARS-CoV-2-infected cultures by ELISA. We quantified levels of soluble MICA and soluble ULBP-2 (Figure 5C) as these were the two most highly expressed NKG2D ligands on mock-infected cells (Figure 2C). We were unable to detect either of these proteins in the supernatants of uninfected or infected cultures, suggesting that secretion of NKG2D ligands is not a major mechanism by which NKG2D-L is downregulated by SARS-CoV-2 (Figure 5C). Collectively, these data suggest that NKG2D-L are downregulated post-transcriptionally and are not degraded or shed in SARS-CoV-2-infected cells. While this supports the hypothesis that Nsp1 inhibits expression of these proteins by translational blockade, we were unable to definitively prove this mechanism, as expression of NKG2D-L could be suppressed by another mechanism such as intracellular retention. 23,29 NKG2D-L have a high rate of surface turnover Although Nsp1 is a global inhibitor of host translation, our data show that it does not equally downregulate all NK cell receptor ligands. We hypothesized that this might be due to differential rates of surface expression turnover across the various ligands, as these proteins are known to have varying levels of stability on the cell surface. 32,52-54 NKG2D-L in particular are rapidly turned over to allow for a high degree of control over their expression levels. 32,52 To validate that non-specific inhibition of a post-transcriptional mechanism could have an outsized effect on NKG2D-L in comparison with the other ligands assayed, we treated A549-ACE2s with the protein transport inhibitor Brefeldin A and measured expression of NK cell receptor ligands after 24 or 48 h (Figure S7). We observed that Brefeldin A, like Nsp1, had a much larger effect on NKG2D-L than on other ligands, including CD54 and DNAM-1 ligands, supporting a model in which global translation inhibition, such as that mediated by Nsp1, could much more dramatically downregulate NKG2D-L than other surface proteins. Nsp1 is not highly expressed until more than 24 h post-infection Thus far, our analyses of NK cell evasion mediated by replication-competent SARS-CoV-2 have relied on mNeonGreen as a correlate of viral protein expression. However, having determined that Nsp1 is the viral protein with the strongest effect on NKG2D-L expression, we wanted to validate (1) that mNeonGreen expression correlates with Nsp1 expression and (2) that Nsp1 expression inversely correlates with NKG2D-L expression in SARS-CoV-2-infected cells. We therefore stained SARS-CoV-2-infected or mock-infected A549-ACE2s with an anti-Nsp1 antibody and compared expression of Nsp1 to expression of mNeonGreen by flow cytometry. We found that essentially all mNeonGreen+ cells also expressed Nsp1 (Figure 6A). In addition, we determined that, like mNeonGreen, we could not detect high levels of Nsp1 expression until >24 h post-infection (Figure 6A); this aligns with our data demonstrating that SARS-CoV-2-infected cells are not fully resistant to NK cell killing until >24 h post-infection (Figures 2E and 2F). While all mNeonGreen+ cells also expressed Nsp1, there was a significant population of cells (~10%) at 48 h post-infection that expressed Nsp1 but did not yet express mNeonGreen (Figure 6A).
This can likely be explained by the fact that Nsp1 is encoded at the 5′-most end of the SARS-CoV-2 genome and is thus the first viral protein to be translated. [44][45][46][47][48][49] This suggests that identification of infected cells based solely on mNeonGreen expression slightly underestimates the number of infected cells and likely explains why bystander cells appear to have slightly decreased expression of NKG2D-L compared with mock-infected cells; the bystander population includes some cells that have been recently infected and express Nsp1 but not yet mNeonGreen. It also allowed us to assess expression of NKG2D-L in infected cells subsetted by their expression of mNeonGreen and Nsp1. As expected, Nsp1− mNeonGreen− (Q4) cells had high expression of NKG2D-L, while Nsp1+ mNeonGreen+ (Q2) cells had lost almost all expression of NKG2D-L. However, Nsp1+ mNeonGreen− (Q3) cells had an intermediate level of NKG2D-L expression, with roughly 20% of this population expressing these ligands (Figure 6B). We hypothesize that these cells are more recently infected and have not yet expressed the full complement of viral proteins. Therefore, these data suggest that NKG2D-L downregulation precedes expression of at least some viral proteins. Nsp1 is sufficient to confer significant resistance to NK cell-mediated killing We hypothesized that, if Nsp1 is the key mediator of NKG2D-L downregulation in SARS-CoV-2 infection, transfection with Nsp1 should be sufficient to confer resistance to NK cell killing. To test this hypothesis, we co-cultured activated, healthy NK cells with cells that had been transfected with either Nsp1 or a control plasmid (GFP) and assessed target cell killing by flow cytometry. Indeed, we found that NK cells were significantly more effective at killing GFP-transfected targets compared with Nsp1-transfected targets in both A549-ACE2s and 293Ts (Figures 7A, 7B, and S8). To determine whether other viral proteins might also mediate escape from NK cell killing, we compared killing of Nsp1-transfected target cells with killing of cells transfected with other SARS-CoV-2 proteins (Figures 7C and S8). We randomly selected 10 additional SARS-CoV-2 proteins to test alongside Nsp1. Each protein was transfected into A549-ACE2s and healthy NK cell killing of transfected cells was assessed 48 h post-transfection. We distinguished transfected cells from untransfected cells within the same well by gating on Strep Tag II expression. Of the 11 proteins transfected, Nsp1-transfected cells were killed significantly less than those transfected with any other plasmid except Nsp10 (no significant difference) (Figure 7C). Nsp1 was also the only protein that significantly protected transfected cells from NK cell killing (Figures 7C and S8D). Moreover, 6 of the other 10 proteins tested caused a significant increase in NK cell killing of transfected cells (Figure S8D). Collectively, these data suggest that Nsp1 is sufficient to protect cells from NK-mediated killing and that resistance to NK cell killing in infected cells overcomes the increase in susceptibility to killing caused by other SARS-CoV-2 proteins. Finally, we sought to compare the protection from NK cell killing mediated by Nsp1 transfection to that conferred by infection with replication-competent SARS-CoV-2. Like SARS-CoV-2, Nsp1 was able to provide significant protection to cells that received the protein versus bystander cells in the same well (Figure 7D).
We then quantified protection from killing by calculating the fold change in killing of treated (Nsp1-transfected or SARS-CoV-2-infected) compared with bystander cells for each donor and found that there was no significant difference between the level of protection mediated by Nsp1 and that mediated by SARS-CoV-2 (Figure 7E). DISCUSSION The role of NK cells in mediating clearance of SARS-CoV-2-infected cells in vivo remains unclear. While several studies have demonstrated that NK cells can reduce the levels of SARS-CoV-2 replication in vitro, no prior study has directly evaluated killing of SARS-CoV-2-infected cells. Here, we address this critical gap in knowledge and demonstrate that SARS-CoV-2-infected cells escape killing by healthy NK cells in a cell-intrinsic manner, while killing of uninfected bystander cells is uninhibited. The ability of infected cells to evade NK cell recognition requires infection to proceed long enough to allow an infected cell to express SARS-CoV-2-encoded proteins. We demonstrate that this escape mechanism is driven by downregulation of ligands for NKG2D, a critical activating receptor on NK cells. We further demonstrate that this ligand downregulation is driven by the SARS-CoV-2 Nsp1 protein and show that Nsp1 alone is sufficient to mediate direct NK cell evasion. While our experimental system using a cell line with high expression of NKG2D-L could enhance the degree of bystander killing, these findings have important implications for NK cell-mediated control of SARS-CoV-2, as preferential escape of infected cells and possible killing of bystander cells could contribute to SARS-CoV-2 pathogenesis. These results illustrate the importance of examining the temporal dynamics of the NK cell response to SARS-CoV-2-infected cells. Other studies have assessed the ability of NK cells to suppress viral load by co-culturing NK cells with SARS-CoV-2-infected targets early after infection; their results suggest that, under these conditions, NK cells can at least partially control viral replication. 16,18,19 It is worth noting that these other studies also varied from ours in parameters such as target cell type, cytokine treatment of NK cells, E:T ratio, and duration of co-culture. Our own observations demonstrate that NK cells are no longer able to effectively kill infected cells when added to the culture at 48 h post-infection, after the expression of viral proteins that suppress the innate immune response. The preferential killing of NKG2D-L-positive bystander cells may have important implications for lung pathology during COVID-19. NKG2D-L can be expressed by most cell types 55 and are upregulated in response to stress during viral infections, including HIV 56 and RSV. 57,58 Therefore, it is possible that NK cells may actually cause damage to the healthy tissue surrounding infected cells rather than clearing the infection, although this hypothesis has not yet been directly tested in primary lung tissue. As NK cells appear to home to the lungs during COVID-19, 59-61 our findings indicate that the timing of NK cell trafficking to the site of infection may impact the efficacy of the NK cell response to SARS-CoV-2 infection, as there is a very narrow window for killing of infected cells before bystander killing could ensue. Interestingly, Witkowski et al.
observed that frequency of peripheral blood NK cells in severe COVID-19 patients negatively correlated with viral load; however, this is difficult to interpret in the context of our data because it is unknown whether the increased NK cell frequencies observed resulted from decreased trafficking to the lungs, increased peripheral proliferation, or another mechanism. 19 Our novel finding that the SARS-CoV-2 protein Nsp1 mediates evasion of NK cell killing has significant implications for both the study of the immune response to coronaviruses and the development of therapeutics for COVID-19. Nsp1 is highly conserved across coronaviruses and is an essential virulence factor; it has been shown to inhibit translation of host antiviral factors across multiple beta-coronaviruses. 44-49, 62 One study found that, among nearly 50,000 SARS-CoV-2 sequences analyzed, only 2.4% had any mutations within Nsp1. 46 SARS-CoV-2 Nsp1 also shares 84.4% of its sequence identity with SARS-CoV Nsp1. Moreover, critical motifs within Nsp1 involved in the inhibition of innate immune responses are highly conserved across many beta-coronaviruses. 46 On a practical level, the high degree of conservation of Nsp1 and its importance in coronavirus virulence have already made this protein the focus of several therapeutic strategies. 44,63,64 Our work demonstrates that Nsp1 is an even more attractive target than previously thought, as inhibiting the function of this protein has the potential to fully or partially rescue the NK cell response to SARS-CoV-2-infected cells. Although Nsp1 is a global inhibitor of host translation, our data demonstrate that it has an outsized effect on NKG2D-L and MHC class I surface expression compared with that of other ligands for NK cell receptors. This appears to be due to the varying stabilities of the different ligands on the cell surface, rather than explicit specificity of Nsp1 for NKG2D-L or MHC class I. It has been established that NKG2D-L are rapidly turned over on the cell surface and are quickly lost upon treatment with a protein transport inhibitor such as Brefeldin A. 32,52 MHC class I is similarly transient on the cell surface in the presence of translation inhibition, although its stability varies with haplotype and peptide binding. 53 CD54, which was not affected by Nsp1, is highly stable for at least 48 h, even after treatment with similar inhibitors. 54 Thus, the differential effects of Nsp1 on various ligands for NK cell receptors are likely explained by the varying kinetics of surface turnover. One of our findings that has been demonstrated by multiple groups is the downregulation of MHC class I upon SARS-CoV-2 infection. The mechanism of this downregulation remains unclear; while our data suggest that Nsp1 is responsible for this loss, ORF3a, 37 ORF7a, 37 ORF6, 38 and ORF8 39 have also been implicated. This could be due to differential downregulation of various HLA molecules by different SARS-CoV-2 proteins. In our study, we grouped together HLAs A, B, and C as there are no commercially available antibody clones that can robustly differentiate HLAs A and B; this is an important limitation of our work. According to the well-established ''missing self'' model of NK cell activation, 65,66 the downregulation of self-MHC can induce NK cell activation through subsequent lack of inhibitory signaling through the killer cell immunoglobulin-like receptors. 
Therefore, it might be expected that the downregulation of MHC by SARS-CoV-2 would enhance the ability of NK cells to lyse infected cells-precisely the opposite of what we observed in our study. We hypothesize that this can be explained by (1) the relative magnitudes of MHC class I and NKG2D-L downregulation on infected cells and (2) the accepted dogma in the field that missing self alone is not sufficient to cause robust NK cell activation. 67,68 As a result, we propose that the loss of NKG2D-L is the dominant factor in the NK cell response (or lack thereof) to SARS-CoV-2. While our study focuses on direct lysis of target cells, NK cells can also kill through antibody-dependent cellular cytotoxicity. A recent study by Fielding et al. found that antibody-dependent NK cell activation can overcome SARS-CoV-2's inhibition of direct cytotoxicity, allowing healthy NK cells to mount stronger responses to infected targets. Thus, prior vaccination or infection that results in pre-existing antibodies to SARS-CoV-2 could tip the balance in favor of killing SARS-CoV-2-infected cells. This study also identified downregulation of NKG2D-L on SARS-CoV-2-infected cells through an orthogonal method. 20 This work has significant implications for the ongoing study of COVID-19. Our results deeply interrogate a potential flaw in the ability of the immune system to mount a comprehensive immune response to COVID-19. We demonstrate that the timing of the NK cell response to SARS-CoV-2-infected target cells is critical, with NK cells being able to control viral replication early in infection, but not after expression of viral proteins has begun. This should be further interrogated in vivo to explore whether the kinetics of NK cell trafficking during COVID-19 affects disease outcome. Finally, we reveal that SARS-CoV-2 protein Nsp1 is a major factor in mediating evasion of NK cell killing. This finding reinforces the attractiveness of Nsp1 as a therapeutic target. Limitations of the study Our study has several limitations. To focus on NK cell responses in the respiratory tract, we used A549-ACE2 cells, which are an immortalized, malignant cell line. This could therefore have enhanced NK cell targeting of bystander cells. In addition, while we demonstrated that Nsp1 was sufficient to confer NK cell escape, we were unable to test whether the absence of Nsp1 rescues NK cell killing because knockout of Nsp1 is lethal to the virus. We also did not fully evaluate why Nsp1 blocks NKG2D-L more effectively than other proteins, but we hypothesize that these proteins are downregulated first as part of the global translation block because they are turned over on the cell surface more quickly and cannot be replaced. Finally, we did not interrogate the ability of every individual SARS-CoV-2 protein to mediate escape from NK cell killing. STAR+METHODS Detailed methods are provided in the online version of this paper and include the following:
A Nail in the Brain Transorbital penetrating brain injuries (TOPI) are rare. We report a case of industrial injury that resulted in perforating eye injury and intracranial foreign body by a nail gun. A 30-year-old man accidentally fired a nail gun onto his left eye at his construction workplace while handling the malfunctioned equipment and sustained a perforating injury of the left eye with intracranial foreign body. The misfired nail was lodged in his frontal lobe of the brain. He also suffered laceration wounds of the lateral canthus of the left eye and fractures of the left orbital floor and roof. He underwent emergency bicoronal craniotomy and removal of intracranial foreign body, followed by left eye examination under anaesthesia as well as scleral toilet and suturing. The nail was successfully removed. He recovered well with no neurological deficit and was discharged on postoperative day 5 with a Glasgow Coma Scale score of 15; however, his left eye vision remained no perception of light. Work-related eye injuries can be debilitating and are largely preventable. Introduction Penetrating brain injury is not common, accounting for 0.4% of head injuries [1].Transorbital penetrating brain injury (TOPI) is even rarer, accounting for 24% of penetrating head trauma in adults and 45% in children [2].Although rare, TOPI can cause serious neurological and ophthalmic disabilities.We herein report a case of industrial injury that resulted in perforating eye injury and intracranial foreign body by a nail gun. This case report was presented as a poster at the 11th COSC UM -APOT Ophthalmic Trauma Meeting 2022 on September 17, 2022. Case Presentation A 30-year-old man, a foreign construction worker, presented to the hospital with loss of vision and bleeding of the left eye following an accident that occurred at his construction workplace.He was using a pneumatic nail gun without wearing protective goggles.When the nail gun jammed, he checked it through the gun barrel and accidentally fired the nail gun onto his left eye. Upon arrival to the hospital, his Glasgow Coma Scale (GCS) score was 15, and vital signs were stable.Other than left eye wounds and swelling, primary and secondary examinations did not reveal additional injuries.He had no past significant medical, surgical, or drug history.He complained of headache and left eye pain, and otherwise he was cooperative and fully oriented during the examination.The muscular strength and tension of all four limbs were normal.His blood analyses and other biochemical parameters were within normal limits.He was given antitetanus injection, intravenous broad-spectrum antibiotics, and anticonvulsant medication. 
Examination of the left eye revealed left eye proptosis with laceration wounds on the lateral canthus.Vision of the left eye was no perception light (NPL).There was left periorbital hematoma, extensive subconjunctival hemorrhage and chemosis, prolapse of uveal tissue from the temporal side of the eyeball, and total hyphema, which obscured visualization of the pupil (Figure 1).Reverse relative afferent pupillary defect (RAPD) of the left eye was positive.Examination of the right eye was unremarkable.Plain skull X-ray showed a nail that had penetrated the left orbital roof (Figure 2).The nail, measuring 3.2 cm, was lodged in the frontal lobe of the brain as revealed by plain computed tomography (CT) (Figure 3).There was also the presence of subdural and subarachnoid hemorrhages at the left frontotemporal region extending to the left parietal region.There were fractures of the left orbital roof and floor (Figure 4).He sustained perforating injury of the left eye with the misfired nail being lodged in the frontal lobe, laceration wounds of the lateral canthus of the left eye, and fractures of the left orbital roof and floor.The patient was co-managed with neurosurgical team and underwent emergency bicoronal craniotomy and removal of intracranial foreign body.Intraoperatively, the nail was found to be penetrated in the parenchyma of bilateral frontal lobes of the brain.The left orbital roof was fragmented with bony pieces that pierced through the dura and brain parenchyma.However, the left internal carotid artery, anterior cerebral arteries, and the olfactory nerves were not injured.The nail was successfully removed as a single piece, and there was no active bleeding after removal of the nail.The surgery was immediately followed by left eye examination under anesthesia, as well as scleral and lid repair.Intraoperatively, there were extensive extrusion of the vitreous and uveal tissue and the crystalline lens via the scleral laceration wound, which measured 10 mm vertically and extended 17 mm posteriorly.The wound was tracked as far as attainable, and the superior and superonasal parts of the globe were explored; no other laceration wound was found. The extruded and non-viable content of the globe was removed, and the scleral wound was closed with 7/0 absorbable sutures.However, severe proptosis due to retro-orbital edema had precluded the apposition of the severely macerated lateral canthus. Postoperatively, the patient was admitted to the intensive care unit, where intravenous ceftriaxone, metronidazole, and anticonvulsants were administered.There was no cerebrospinal fluid leakage or active bleeding.To prevent exposure keratopathy due to severe proptosis and lagophthalmos, a moisture chamber was created for him.This was prepared using transparent film waterproof dressing covering the left eye with topical chloramphenicol eyedrops and ointment.Daily dressing of the left eye was done.He was extubated on the next day. 
He recovered well during his postoperative period with no neurological deficit and was discharged on postoperative day 5 with a GCS score of 15.His left eye proptosis and lagophthalmos improved with fairly clear cornea and hyphema.However, his left eye vision remained NPL.He was discharged with the advice of continuing the moisture chamber.He was seen after one week in the eye clinic.Proptosis and chemosis had much reduced, eyeball was soft, but there was still lagophthalmos.The cornea was fairly clear without signs of exposure keratopathy, and the anterior chamber was formed with hyphema.The left lateral canthal wound was clean, and he was planned for lateral canthal reconstruction.The patient was counselled regarding monocular precautions and was advised to wear protective goggles to prevent any inadvertent trauma to the healthy eye.Unfortunately, the patient was lost to follow-up as he had returned to his native country to continue treatment. Discussion Nail gun is a popular tool in the construction industry as it can significantly increase productivity.The firing speed of a nail gun ranges from 45.7m/s (pneumatic nail gun) to 426.7 m/s (powder-actuated tool) [3].The ease and speed of nailing enhances productivity at the cost of increased potential for traumatic injury.TOPI has the tendency to occur in young males, resulting in a high risk of blindness and subsequently increasing our economic burden.Three common routes for TOPI to occur are the optic canal, the superior orbital fissure, and the orbital roof, with the thin and fragile orbital roof being the most frequent route, resulting in frontal lobe contusion [4].TOPIs are mostly caused by missile injuries, gunshot wounds, and shrapnel wounds.Non-missile injuries are often caused by high-velocity sharpened objects, especially metallic materials such as scissors, screwdrivers, knives, or even wooden sticks [5]. Plain skull X-ray in this case confirmed the presence of the nail intracranially.However, plain skull X-ray has a very high failure rate for detecting fractures and certain types of foreign bodies such as wood, plastic, and glass [6].As such, the gold standard imaging modality for initial radiological assessment in TOPI cases is non-contrasted CT [7].CT should be performed as soon as possible because it can identify and localize the foreign bodies, assess extension of the lesion and pathway of penetration, and identify bone fragments and hematoma.This provides essential information for planning patient's management and surgical procedures [7].Three-dimensional CT image of the patient's skull is also a useful adjunct to surgical planning, as it provides a detailed three-dimensional analysis of the bony pathology images, position, and trajectory of the foreign body. 
Surgical repair of the eyelids in this case had been very challenging due to severe chemosis and swelling of periorbital tissues on top of the severely macerated wounds of the eyelids with tissue loss. Although primary closure of the eyelid lacerations was performed in this case, severe proptosis postoperatively had precluded apposition of upper and lower lids, causing total exposure of the cornea with prolapse of conjunctiva. Therefore, to prevent exposure keratopathy, a moisture chamber was applied round the clock with chloramphenicol ointment as a lubricant. The moisture chamber was created using transparent adhesive film dressing. It protects the ocular surface by acting as a barrier against evaporation of tears, thereby increasing the periocular humidity and tear-film lipid-layer thickness [8]. In addition, it can act as a physical barrier to microorganisms and reduce transmission of infection to the eye. The adhesive film dressing was cut to a size sufficient to cover the area from the eyebrow to the cheek vertically and from the nasal bone to the lateral orbital rim horizontally and was replaced daily. A moisture chamber is an effective measure to preserve the corneal surface temporarily while awaiting definitive surgical repair of the eyelids. Various measures to prevent exposure keratopathy include regular eye toileting, frequent lubrication (ointment and drops), lid taping, transparent hydrogel dressings, and moisture chambers with swimming goggles or polyethylene covers. A systematic review and meta-analysis by Zhou et al. found that moisture chambers are significantly better than lubrication without a moisture chamber in preventing exposure keratopathy in intensive care unit patients [9]. A randomized controlled study by Shan and Min found that moisture chambers using polyethylene film covers were statistically significantly more likely to prevent exposure keratopathy than artificial tears alone for intensive care patients. Moreover, it showed that the use of polyethylene film covers was also more time-saving compared to artificial tears alone [10]. Conclusions In conclusion, TOPI is an emergency that requires immediate multidisciplinary workup and surgical intervention. Work-related eye injuries, especially TOPI, can be debilitating and potentially life-threatening, which, in turn, causes productivity loss and increases economic burden. However, they are largely preventable by increasing awareness and providing adequate education on workplace safety measures. This case highlights the importance of wearing proper personal protective equipment at work, which includes protective eye wear such as safety goggles with polycarbonate lenses and face shields, head protection such as helmets, respiratory protection such as respirators and masks, and hand and body protection such as gloves, vests, aprons, and boots. Adequate personal protective equipment should be provided by the employers, as well as proper training and demonstration on safe handling of the tools for the workers in order to prevent work-related injuries. FIGURE 1: Picture of the patient's left eye at presentation showing lacerated lateral canthus, prolapse of the uveal tissues, extensive subconjunctival hemorrhage, chemosis, and total hyphema. FIGURE 2: Plain X-ray of the skull, anteroposterior (A) and lateral (B) views, showing a metallic foreign body resembling a nail.
FIGURE 4: A three-dimensional reconstructed CT of the brain showing the nail penetrating the left orbital roof and lodged in the frontal lobes.
The use of androgen deprivation therapy for prostate cancer and its effect on the subsequent dry eye disease: a population-based cohort study This study aimed to investigate the influence of androgen deprivation therapy (ADT) on the development of dry eye disease (DED) in subjects with prostate cancer via the use of the national health insurance research database (NHIRD) of Taiwan. A retrospective cohort study was conducted, and patients with prostate cancer receiving ADT were selected according to diagnostic and procedure codes. Each participant in that group was then matched to one patient with prostate cancer but without ADT and two subjects without prostate cancer and ADT, giving 1791, 1791 and 3582 participants in the respective groups. The primary outcome was set as DED development according to the diagnostic codes. Cox proportional hazard regression was applied to calculate the adjusted hazard ratio (aHR) and 95% confidence interval (CI) of ADT and other parameters for DED development. There were 228, 126 and 95 new events of DED in the control group, the prostate cancer without ADT group and the prostate cancer with ADT group, respectively. The rates of DED in the prostate cancer with ADT group (aHR: 0.980, 95% CI: 0.771-1.246, P= 0.8696) and the prostate cancer without ADT group (aHR: 1.064, 95% CI: 0.855-1.325, P= 0.5766) were not significantly different from that in the control group. In addition, the patients aged 70-79 years old demonstrated a significantly higher incidence of developing DED compared to those aged 50-59 years old (aHR: 1.885, 95% CI: 1.188-2.989, P= 0.0071). In conclusion, the use of ADT did not alter the incidence of subsequent DED. Introduction Prostate cancer is a prevalent cancer in the male population [1], with more than 1,400,000 new cases of prostate cancer and 370,000 related deaths reported globally in 2020 [2]. Androgen deprivation therapy (ADT) has been used as a common therapy that reduces prostate function and suppresses the progression of prostate cancer [1,3,4]. The treatment options of ADT in prostate cancer include LHRH agonists, estrogens, antiandrogens, and orchiectomy [5]. The median survival duration for prostate cancer was about 14 years under ADT management [6], and the early use of ADT showed certain benefits for patients with prostate cancer and nodal metastases [7]. Several complications have been reported after ADT management [8]. Cardiovascular disorders are common complications after ADT [8,9]. According to one study, subjects who received ADT had a higher incidence of ischemic stroke and coronary arterial disease [8]. Besides, the rate of sudden cardiac death was significantly higher in patients who received ADT [10]. In addition to the above disorders, ADT is associated with the development of deep vein thrombosis [11]. Other reported complications of ADT include cognitive decline, anemia, osteoporosis, depression and diabetes mellitus (DM) [12][13][14]. Hormone status, such as the levels of growth factors and estrogen, is known to influence ocular conditions [15,16]. Dry eye disease (DED) is a multifactorial disorder featuring tear film dysfunction and ocular surface damage [17].
According to previous experience, the aromatase inhibitor therapy would result in DED symptoms [18], and the use of 5α-Reductase inhibitor finasteride would also contribute to androgen deficiency DED [19]. About other experiences between androgen deficiency status and DED, one study demonstrated the protective effect of androgen on DED while another randomized controlled double-masked study showed insignificant correlation between the androgen level and DED development [20,21]. Consequently, additional long-term research may be conduct to survey this issue more clearly. The purpose of the current study is to investigate the possible relationship between the ADT and subsequent DED via the application of the national health insurance research database (NHIRD) of Taiwan. In addition to ADT, other potential risk factors for DED occurrence were also evaluated in the statistical analysis. Data source Our retrospective cohort study adhered to the declaration of Helsinki in 1964 and its later amendment, and the current study was approved by both the Institutional Review Board of Chung Shan Medical University (Project identification code: CS1-20108), and the National Health Insurance Administration. Moreover, the need of informed consent from subjects was waived by the two institutions. NHIRD of Taiwan contains the claimed data of health insurance service for nearly all Taiwanese that means about 23 million individuals. The interval of NHIRD ranged from January 1, 2000 till December 31, 2018, and the data available from NHIRD include the International Classification of Diseases, Ninth Revision (ICD-9) diagnostic code, International Classification of Diseases, Tenth Revision (ICD-10) diagnostic codes, demographic data, examination code, code of procedure and international ATC codes for all medications. In our study, we used the longitudinal health insurance database (LHID) 2005 version, which is one of the sub-databases from NHIRD, for all the analyses. In LHID 2005, approximately two million patients were randomly selected from the NHIRD at the year of 2005, and these individuals were followed as the same time period as in the NHIRD. Patient Selection Men aged from 40 to 100-year-old who received ICD-9 or ICD-10 diagnostic codes of prostate cancer and experienced aromatase inhibitors, LHRH agonists, antiandrogens, estrogens or bilateral orchiectomy (according to procedure/ATC codes) were included in the prostate cancer with ADT group. The exclusion criteria included blindness, ocular tumor, eyeball removal procedure, severe ocular trauma, DED development or death before index date, ADT prior to prostate cancer diagnosis and prostate cancer developed before 2001 (n=572). The index date was defined as six months after the starting of ADT. Then each subject with prostate cancer and ADT was matched to one prostate cancer participant without ADT and two non-prostate cancer patients. If a prostate cancer patient with ADT cannot be matched to individuals in other two populations, that person would be discarded. The match method is propensityscore matching (PSM) with age and socio-economic status, and the non-prostate cancer population constituted the control group. In our study, 1,791, 1,791 and 3,582 patients were enrolled in the prostate cancer with ADT group, prostate cancer without ADT group and the control groups. 
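The matching itself was performed with SAS (see the Statistical Analysis section below); purely as an illustrative sketch, the following Python code shows one common way to implement greedy 1:1 nearest-neighbor propensity-score matching of the kind described above. The column names ("adt", "age", "ses"), the matching-with-replacement strategy, and the absence of a caliper are assumptions for the example rather than details taken from the study, which additionally matched each ADT patient to two non-cancer controls.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def propensity_match_1to1(df, treat_col, covariates):
    """Greedy 1:1 nearest-neighbor match on the propensity score (illustrative only)."""
    # Estimate each patient's probability of belonging to the treated group.
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat_col])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])

    treated = df[df[treat_col] == 1]
    pool = df[df[treat_col] == 0]

    # For every treated patient, pick the candidate with the closest propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(pool[["ps"]])
    _, idx = nn.kneighbors(treated[["ps"]])
    matched = pool.iloc[idx.ravel()]

    # Note: this simple version matches with replacement and applies no caliper;
    # published cohort studies usually match without replacement.
    return pd.concat([treated, matched])

# Hypothetical usage: "adt" = 1 for prostate cancer with ADT, 0 for candidate matches;
# "age" and "ses" (socio-economic status) mirror the matching variables described above.
# matched_cohort = propensity_match_1to1(cohort, treat_col="adt", covariates=["age", "ses"])
```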
Main Outcome Measurement The primary outcome is the development of DED which defined as (1) the diagnosis of DED based on the corresponded ICD-9 and ICD-10 diagnostic codes, (2) the arrangement of fluorescein test or Schirmer's test before the diagnosis of DED, and (3) the DED was diagnosed by an ophthalmologist. To survey the possible correlation between the ADT and DED, only the DED developed after the index date was defined as the achievement of the primary outcome in the current study. Demographic and Co-morbidity Variables To let the general status of our study population more homogenous, the effects of the following parameters were included in the multivariable analysis: age, urbanization, occupation, hypertension, diabetes mellitus (DM), coronary arterial disease (CAD), acute myocardial infarction (AMI), hyperlipidemia, cerebrovascular disease and dementia. The existence of these parameters was according to related ICD-9 and ICD-10 diagnostic codes for all the diseases. Besides, the CAD referred to those with chronic ischemic heart disease according to ICD-9 and ICD-10 diagnostic codes. All participants were followed longitudinally since the index date to the date of DED diagnosis, quit from the National Health Insurance program, or the end of NHIRD interval, which also known as the 31 December, 2018. Statistical Analysis SAS version 9.4 (SAS Institute Inc, NC, USA) was used for all the statistical analyses. After the PSM method, we used descriptive analysis to show the baseline characters of the three groups. The Poisson regression was used for the incidence rate of DED with corresponding 95% confidence interval (CI) among the groups. Then Cox proportional hazard regression was applied to estimate the crude as well as the adjusted hazard ratio (aHR) of DED among the three groups which considered the possible effects of the demographic data and systemic diseases in our multivariable analysis. Besides, Cox proportional hazard regression was also used to evaluate the effect of each parameter on the development of DED and presented as aHR with 95% CI. In the next step, we made the Kaplan-Meier curves to illustrate the cumulative probability of DED among the prostate cancer with ADT group, prostate cancer without ADT group and the control group, then the log rank test was used to investigate whether significant difference exist among the three survival curves from different groups. The threshold of statistical significance was set at P < 0.05. Results The baseline characters of the study population are shown in Table 1. The distribution of age, urbanization and occupation were similar among the three groups due to PSM process. Moreover, the rate of systemic co-morbidities were also statistical insignificant among the three groups although a numerically higher rate of systemic diseases was found in the prostate cancer with ADT group. For the type of ADT, the antiandrogens therapy was the most commonly used ADT which 67.67 percent of patients received such management, while 61.86 percent, 11.28 percent and 7.82 percent of subjects received LHRH agonists, bilateral orchiectomy and estrogen therapy, respectively (Table 1). There were 228, 126 and 95 new cases of DED occurred in the control group, the prostate cancer without ADT group and the prostate cancer with ADT group, respectively. 
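Before the adjusted estimates are presented, the survival-analysis workflow described in the Statistical Analysis section can be sketched in code. The authors ran these models in SAS 9.4; the Python example below, using the lifelines package, is only a rough illustration of the same steps: a Cox proportional hazards fit for adjusted hazard ratios, Kaplan-Meier curves of the cumulative probability of DED by group, and a log-rank test. The file name and column names are hypothetical, and categorical covariates are dummy-encoded before fitting.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Hypothetical cohort table, one row per patient:
# duration  = years from index date to DED, withdrawal from insurance, or end of follow-up
# ded_event = 1 if DED was diagnosed, 0 if censored
# group     = 0 control, 1 prostate cancer without ADT, 2 prostate cancer with ADT
# remaining columns = covariates such as age group, DM, hypertension, CAD, etc.
cohort = pd.read_csv("cohort.csv")

# Expand the categorical group variable into indicator columns for the Cox model.
cox_df = pd.get_dummies(cohort, columns=["group"], drop_first=True)

# Cox proportional hazards regression: adjusted hazard ratios with 95% CIs.
cph = CoxPHFitter()
cph.fit(cox_df, duration_col="duration", event_col="ded_event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])

# Kaplan-Meier cumulative probability of DED for each group.
kmf = KaplanMeierFitter()
for name, sub in cohort.groupby("group"):
    kmf.fit(sub["duration"], sub["ded_event"], label=f"group {name}")
    kmf.plot_cumulative_density()

# Log-rank test across the three groups.
result = multivariate_logrank_test(cohort["duration"], cohort["group"], cohort["ded_event"])
print("log-rank p-value:", result.p_value)
```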
In the Cox regression analysis, the incidences of DED in the prostate cancer with ADT group (aHR: 0.980, 95% CI: 0.771-1.246, P= 0.8696) and the prostate cancer without ADT group (aHR: 1.064, 95% CI: 0.855-1.325, P= 0.5766) were not significantly different from that in the control group (Table 2). Besides, the cumulative probabilities of DED development were similar among the three groups at different time points (P= 0.1413) (Figure 1). In the analysis of different parameters, the patients aged 70-79 years old showed a significantly higher risk of developing DED compared to those aged 50-59 years old (aHR: 1.885, 95% CI: 1.188-2.989, P= 0.0071). The other parameters, including the demographic data and systemic disorders, did not demonstrate a significant influence on the occurrence of DED (all P> 0.05) (Table 3). Discussion Briefly, the current study showed an insignificant effect of ADT on the development of DED in patients with prostate cancer. In addition, the cumulative probability of DED among different patient groups did not reveal significant differences over time. On the other hand, age between 70 and 79 years demonstrated a prominent influence on the development of DED and served as an independent risk factor. The formation of DED is thought to be multifactorial, with the inflammatory reaction as the major mechanism according to recent literature [17,22,23]. In the report published by the Dry Eye Workshop, the development of DED is attributed to a vicious cycle that damages the ocular surface [22]. As the tear film becomes unstable, its osmolarity increases, which can be exaggerated by the presence of meibomian gland dysfunction [22]. Inflammatory cytokines such as interleukins and tumor necrosis factor are then released and damage the goblet cells as well as the corneal epithelium, further destabilizing the tear film [22]. Consequently, disorders that induce an inflammatory reaction may elevate the risk of DED development [24]. Some autoimmune diseases were associated with DED occurrence in previous studies, including Sjogren syndrome, rheumatoid arthritis, systemic lupus erythematosus and gout arthritis [25][26][27][28]. On the other hand, changes in hormone status can also lead to the production of inflammatory cytokines [29]. In a previous study, estrogen was associated with elevation of interleukins and reactive oxygen species [30]. Besides, the relationship between androgen and the suppression of inflammatory reactions has been established [31,32]. However, there was no strong correlation between androgen deficiency and autoimmune disease, which indicates that an elevated inflammatory process does not always cause inflammation-related disease. Moreover, in an experimental study, androgen deficiency did not cause lacrimal gland inflammation [33]. Since DED is correlated to several inflammatory processes and ADT could alter the inflammatory reaction [19,24], the potential effect of ADT on DED development should be surveyed; the results of the current study, however, demonstrated an insignificant association between ADT and DED. The relationship between ADT and DED has not been established firmly in previous research [19][20][21][34], while the result of the current study illustrated a minimal influence of ADT on subsequent DED.
About the two studies that showed a significant effect of ADT on DED, one was experimental studies which used DED model to survey the potential relationship between androgen deficiency and DED [19]. Another prospective study that supported the association between DED and androgen recruited only 50 participants, and they concluded that the application of androgen transdermal device can decrease the severity of DED [21]. In the current study, we enrolled approximately 7 thousands participants in the whole study population and the follow up period can up to 18 years. Furthermore, the current study enrolled multiple parameters in the analysis model to erase the effect of possible confounders thus the results may be more reliable compared to the researches that evaluate the relationship between androgen deficiency and DED but without considering the influence of other factors [19,21]. On the other hand, the cumulative probability of DED in the prostate cancer with ADT group did not elevate throughout the study interval compared to the prostate cancer without ADT group and the control group, which may indicates the long-term application of ADT did not increase the incidence of DED compared to non-ADT user. Concerning the other parameters that may contribute to the development of DED, the age range from 70 to 79 years old showed a significantly higher rate of DED occurrence compared to those aged 50 to 59 years old. The age is a well-established risk factor for DED development [35]. And about the parameters of DED, older age is correlated to shorter tear break-up time and ocular surface stains compared to younger individuals [36]. In the current study, the significant correlation of old age to DED development compared to the younger population was compatible to previous experience. However, the patients aged 80 years or older did not reveal significantly higher incidence of DED compared to those aged 50 to 59 years old. There are two possible explanations for the conflicting results. Firstly, the patients older than 80 years old may become more disable and thus would not visit the ophthalmic department as easy as their younger counterpart [37], thus the diagnostic rate of DED could be reduced. Another possible reason is because the visual display terminal is another prominent risk factors for DED [38], and patients aged more than 80 years old might not use these device commonly according to clinical experience. The other parameters did not show significant effect of the development of DED. Although DM was associated with impaired corneal epithelial wound healing [39], the influence of this corneal injury may not induce persistent ocular inflammation and following DED. About the epidemiology aspect, the DED is a prevalent disease in the elderly population [35]. In an epidemiological research, the prevalence of DED was about 11.3 percent in the population older than 50 years [40]. Although the female is more vulnerable to the DED, the prevalence of DED in the male population still reached 5.65 percent in that study. [40] On the other hand, the prostate cancer is one of the most common cancers in the elderly male population [41,42]. According to a previous research, the prevalence of prostate cancer is above 30 per 100000 male in Asian region [43]. Moreover, the ADT was applied in nearly all the prostate cancer individuals [1]. 
Because both DED and prostate cancer affect a large proportion of the elderly male population and ADT is widely applied in those with prostate cancer [1,40], the importance of investigating whether ADT is related to subsequent DED occurrence cannot be overemphasized. There are some limitations in the current study. First, the retrospective design and the nature of claims-data research diminish the homogeneity and accuracy of the study. Second, we could only ascertain that patients received DED-related examinations and ADT, while the severity and treatment outcomes of both prostate cancer and DED cannot be obtained from the NHIRD/LHID. In addition, we did not analyze the effects of different types of ADT on DED separately, because many participants in the current study received more than one type of ADT. Also, more than half of the patients with prostate cancer who received ADT were excluded in the matching process, which may reduce the statistical power. Nevertheless, since we wanted to ensure homogeneity among the groups and the case numbers in the current study are not inferior to those of previous studies investigating ADT [14,44], the influence of this limitation may not be prominent.

In conclusion, the application of ADT did not lead to a higher incidence of subsequent DED with either short-term or long-term use. Furthermore, old age remains a risk factor for DED development, especially in those aged 70-79 years. Consequently, the use of ADT may be safe even in those with predisposing factors for DED. A further large-scale prospective study evaluating whether the use of ADT affects the therapeutic outcome of DED is warranted.
Strain-Induced Polarization Enhancement in BaTiO$_3$ Core-Shell Nanoparticles Despite fascinating experimental results, the influence of defects and elastic strains on the physical state of nanosized ferroelectrics is still poorly explored theoretically. One of the unresolved theoretical problems is the analytical description of the strongly enhanced spontaneous polarization, piezoelectric response, and dielectric properties of ferroelectric oxide thin films and core-shell nanoparticles induced by elastic strains and stresses. In particular, 10-nm quasi-spherical BaTiO3 core-shell nanoparticles reveal a giant spontaneous polarization up to 130 µC/cm², whose physical origin is a large Ti off-centering. The available theoretical description cannot explain the giant spontaneous polarization observed in these spherical nanoparticles. This work analyzes the polar properties of BaTiO3 core-shell spherical nanoparticles using the Landau-Ginzburg-Devonshire approach, which considers the nonlinear electrostriction coupling and large Vegard strains in the shell. We reveal that a spontaneous polarization greater than 50 µC/cm² can be stable in a (10-100) nm BaTiO3 core at room temperature, where a 5 nm paraelectric shell is stretched by (3-6)% due to Vegard strains, which contribute to the elastic mismatch at the core-shell interface. The polarization value of 50 µC/cm² corresponds to high tetragonality ratios (1.02-1.04); the polarization can be further increased up to 100 µC/cm² by higher Vegard strains and/or intrinsic surface stresses, leading to unphysically high tetragonality ratios (1.08-1.16). The nonlinear electrostriction coupling and the elastic mismatch at the core-shell interface are key physical factors of the spontaneous polarization enhancement in the core. Doping with the highly-polarized core-shell nanoparticles can be useful in optoelectronics and nonlinear optics, electric field enhancement, reduced switching voltages, catalysis, and electrocaloric nanocoolers. The 10-nm quasi-spherical BaTiO3 core-shell nanoparticles reveal a giant spontaneous polarization up to 130 µC/cm² at room temperature, which is five times greater than the bulk value of 26 µC/cm² (see Refs. [6,11,12] and references therein). The incorporation of these nanoparticles can have multiple benefits in various applications, such as enhanced beam coupling efficiency [13], reduced switching voltages/DC bias [14,15], ionic contamination elimination [16], and catalysis [17]. Experimental realizations of quasi-spherical BaTiO3 ferroelectric nanoparticles are abundant, and sizes of (5-50) nm are typical experimental values [18,19,20,21]. Nanoparticles embedded in heptane and oleic acid produce core-shell nanoparticles, where the oleic acid is transformed into an organic crystalline (metal carboxylate) shell surrounding the inorganic BaTiO3 core, resulting from mechanochemical synthesis during the ball-milling process [22]. The metal carboxylate coating/shell around the BaTiO3 core can be in two forms: one is crystalline and provides a lattice mismatch at the core-shell interface, and the other is non-crystalline without mismatch conditions [6]. The observed polarization enhancement is possible for the BaTiO3 core with the crystalline shell. It is worth noting that ferroelectric nanoparticles with such giant spontaneous polarization values are not always achievable from the ball-milled mixture without additional processing. Typically, the harvesting technique described in Ref.
[23] is required, which relies on an electric field gradient to selectively harvest ferroelectric nanoparticles with the strongest dipole moments from bulk nanoparticle ball-milled mixtures. The total nanoparticle yield of strong dipoles using the ball-milling/mechanochemical synthesis technique has varied from nearly 0% to ∼100% [23], while the harvesting technique has been shown to repeatedly provide 100% usable strong dipoles from these mixtures. The harvesting technique was described by Yu. Reznikov in Ref. [24] as being a "breakthrough" in solving the problem of irreproducibility. Alternatively, a process of separation via centrifuge or simple sedimentation, where larger particles drop and form agglomerates and small particles remain in suspension, has also been shown to provide strong dipoles. Although this latter method provided the strong dipole particles used in Refs. [6,7], its effectiveness and reliability compared to the proven harvesting technique have not been determined. The physical origin of the giant spontaneous polarization in the quasi-spherical BaTiO3 core-shell nanoparticles remained a mystery for a long time, until recent X-ray spectroscopic measurements [7] revealed a large Ti-cation off-centering in 10-nm nanoparticles near 300 K, confirmed by the tetragonality ratio c/a ≈ 1.0108 (in comparison with c/a ≈ 1.0075 for 50 nm nanoparticles). The off-centering of Ti-cations is a key factor in producing the enhanced spontaneous polarization in the nanoparticles. Sharp crystalline-type peaks in the barium oleate Raman spectra suggest that this component in the composite core-shell matrix, a product of mechanochemical synthesis, stabilizes an enhanced polar structural phase of the BaTiO3 core. To the best of our knowledge, there is no available theoretical description that can explain the giant spontaneous polarization repeatedly observed in the BaTiO3 core-shell nanoparticles. Indeed, the surface bond contraction mechanism can only decrease the polarization of ferroelectric ABO3-type perovskite nanoparticles [25,26]. Some theoretical papers [27,28,29] predict the enhancement of a reversible spontaneous polarization in prolate nanoellipsoids, nanorods, and nanowires of ABO3-type perovskites, when their polarization is directed along the longest axis. A significant polarization enhancement can also appear due to a high positive surface tension coefficient µ and a negative linear electrostriction coupling coefficient Q12; the dependence of the Curie temperature on the particle radius R is proportional to the positive value −4µQ12/R (see, e.g., Table 1 in Ref. [30]). Furthermore, this same mechanism leads to a significant reduction of the Curie temperature and spontaneous polarization in spherical BaTiO3 nanoparticles specifically, because the value −2µ(2Q12 + Q11)/R is negative, since the condition µ > 0 is required for the surface equilibrium and 2Q12 + Q11 > 0 for BaTiO3. The flexo-chemical effect [31], emerging from the joint action of the Vegard stresses and the flexoelectric effect, can increase the Curie temperature, the spontaneous polarization, and c/a in ultra-small (5 nm or less) spherical BaTiO3 nanoparticles and explain experimental results [5], although the effect rapidly disappears with increasing radius (∝ 1/R²) and requires very high values of the flexoelectric coupling and intrinsic strains. The Vegard strains can significantly increase these quantities in spherical KTa1-xNbxO3 nanoparticles with R < 30 nm [30], as well as in BaTiO3 core-shell nanoparticles with R < 10 nm (see Fig. 9 in Ref.
[32]); however, the influence of strong Vegard strains (i.e., >0.5%) on the BaTiO3 spontaneous polarization was not studied in Ref. [32]. Nonlinear electrostriction coupling, which needs to be considered for strains higher than 1%, induces an instability of the 6-th order BaTiO3 thermodynamic potential, because a higher strain changes the positive sign of the 6-th order polarization term at temperatures well below 350 K. Note that the nonlinear electrostriction coupling can be very important for a correct description of the polar properties of strained ferroelectric thin films [33,34] and core-shell nanoparticles [35,36]. All aforementioned and many other theoretical works considering BaTiO3 nanoparticles are based on the Landau-Ginzburg-Devonshire (LGD) phenomenological approach, which includes the 2-nd, 4-th, and 6-th powers of polarization in the LGD free energy expansion and only considers linear electrostriction coupling (see, e.g., Ref. [32] and references therein). This work analyzes the polar properties of core-shell BaTiO3 nanoparticles using the LGD free energy functional proposed by Wang et al. [37], which includes the 8-th power of polarization, and thus allows high Vegard strains in the shell and the nonlinear electrostriction coupling in the core to be considered.

A. The problem formulation

Let us consider a spherical BaTiO3 core-shell nanoparticle in the tetragonal phase, whose core of radius R is single-domain with a spontaneous polarization P directed along one of the crystallographic directions (e.g., along the polar axis X3). The crystalline core has a perfect structure (without any defects) and is considered to be insulating (without any free charges). The core is covered with a crystalline shell of thickness ΔR and outer radius Rs. The shell is semiconducting and paraelectric due to the high concentration of free charges and elastic defects. The free charges provide an effective screening of the core spontaneous polarization and prevent domain formation. The elastic defects induce strong Vegard strains, which are regarded as cubic, u_ij = W δ_ij, where δ_ij is the Kronecker delta symbol and W is the magnitude of the Vegard strains in the shell, which, as a rule, cannot exceed (5-10)%. These strains can stress the core due to the elastic mismatch at the core-shell interface. The effective screening length in the shell, λ, is small (less than 1 nm), and its relative dielectric permittivity tensor is isotropic, ε_ij = ε_s δ_ij, and can be very high (10²-10³), as anticipated for the paraelectric state. The core-shell nanoparticle is placed in a dielectric medium (polymer, gas, liquid, air, or vacuum) with an effective dielectric permittivity ε_e. The core-shell geometry is shown in Fig. 1.

FIGURE 1. A spherical core-shell nanoparticle: a ferroelectric core of radius R is covered with a paraelectric shell of thickness ΔR, which is full of elastic defects and free charges. The nanoparticle is placed in an isotropic dielectric medium; ε_b, ε_s, and ε_e are the core background, shell, and surrounding media dielectric permittivities.

The LGD free energy density includes the Landau-Devonshire expansion in even powers of the polarization P3 (up to the 8-th power), the Ginzburg gradient energy, and the elastic and electrostriction energies, which are listed in Appendix A1 of the Supplementary Materials [38].
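For orientation, the generic single-component form of such a 2-4-6-8 Landau-Devonshire free energy density, written here with only the linear electrostriction term, is sketched below; this is the standard textbook structure under our own notation (a, b, c, d, g33, Q, s), not a reproduction of the paper's Eq. (A.2) or its coefficient values. The nonlinear electrostriction discussed in the text adds further terms of the type σ_i P_3^4 and σ_i σ_j P_3^2.

```latex
g \simeq a\,P_3^{2} + b\,P_3^{4} + c\,P_3^{6} + d\,P_3^{8}
      + \frac{g_{33}}{2}\,(\nabla P_3)^{2}
      - Q_{i3}\,\sigma_i\,P_3^{2}
      - \frac{1}{2}\,s_{ij}\,\sigma_i\,\sigma_j
      - P_3 E_3 .
```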
The equilibrium polarization distribution in the core follows from the Euler-Lagrange equation, which in turn follows from the minimization of the LGD free energy, and has the form: Here the parameters , , , and are the 2-nd, 4-th, 6-th, and 8-th order Landau expansion coefficients in the 3 -powers of the free energy corresponding to the bulk BaTiO3. The values denote diagonal components of a stress tensor in Voigt notation, and subscripts , = 1 − 6. The values 3 , 33 , and 3 are the components of a single linear and two nonlinear electrostriction strain tensors in Voigt notation, respectively [39,40]. The values 33 are polarization gradient coefficients in matrix notation, and subscripts , = 1 − 3. The Neumann boundary condition for 3 at the nanoparticle surface S is 33 ∂ 3 | = 0, where ⃗ is the outer normal to the surface. These conditions are also called "natural", because corresponding surface energy is zero in the case. The value 3 is an electric field component, co-linear with the polarization 3 , which is a superposition of external and depolarization fields, 3 0 and 3 , respectively. The quasi-static field 3 is related to the electric potential as 3 = − 3 . The potential satisfies the Poisson equation Table AI). The coefficient depends linearly on the temperature , ( ) = ( − ), where = 381 K is the Curie temperature. Also, the coefficients and linearly depend on the temperature and can change their sign. Since = 0, the 2-4-6 LGD functional becomes unstable above 445 K, when becomes negative. This is very inconvenient for the modeling of strongly stressed nanoparticles, because elastic stresses above 1% can reduce the instability temperature (room or lower), making the 2-4-6 LGD free energy functional unsuitable for the modeling of strongly stressed nanoparticles. 2) More rarely used is the "2-4-6-8 LGD" free energy functional, which includes the 2-nd, 4-th, 6-th, and 8-th powers of the polarization 3 in the Landau-Devonshire free energy without consideration of the nonlinear electrostriction coupling effect, i.e., = 0 and = 0 (see the third column in Table AI). For this case, the coefficient also depends linearly on the temperature , ( ) = ( − ) , where = 391 K. The coefficients and linearly depend on the temperature and can change sign, but the coefficient is positive and temperature-independent. Since > 0, the 2-4-6-8 LGD free energy functional is stable for arbitrary temperatures, and thus is suitable for artifact-free modeling of strongly stressed nanoparticles. The most important part of this work is to study how the nonlinear electrostriction coupling [41], we can assume that the possible range of variation can be even wider in the core-shell BaTiO3 nanoparticles. These speculations give us some grounds to vary within the range from -1.5 m 8 /C 4 to +1.5 m 8 /C 4 to look for optimal values that correspond to the highest spontaneous polarization and the best related properties. B. Approximate analytical description For the case of natural boundary conditions used in this work, 33 ∂ 3 = 0, small , and relatively large gradient coefficients | | > 10 −11 C -2 m 3 J, the polarization gradient effects can be neglected in a single-domain state, which reveals to have a minimal energy in comparison to polydomain states. 
The field dependence of a quasi-static single-domain polarization can be found from the following equation: * The depolarization field, 3 , and stresses, , contribute to the "renormalization" of coefficient ( ), which becomes the temperature-, radius-, stress-, and screening length-dependent function * [36]: * ( , , ) = ( ) + 1 0 ( +2 + ⁄ ) − (2 3 + 3 ). The derivation of the second term in Eq.(3a) is given in Ref. [42]. Here, λ = λ( 3 ) can be a rather small value (less than 0.1 -1 nm) due to free charges and surface band bending in the shell. In the right-hand side of Eq.(2), 3 is the static external field inside the core, for which the estimate 3 ≈ 3 3 0 +2 + ⁄ is valid. If λ( 3 0 ) ≫ and ~, the field in the core is of the same order as the applied field 3 0 (see details in Ref. [36]). Elastic stresses in the core, , induced by the Vegard strains in the shell, can be calculated analytically using the method of successive approximations. When the spontaneous polarization is absent in the paraelectric phase of the core, or small in the "shallow" ferroelectric state located near the paraelectric-ferroelectric transition, the core can be considered as elastically-isotropic due to its cubic symmetry or very small tetragonality related with the small electrostriction contribution. In Appendix A2 an approximate expression for the diagonal stresses in the core and shell is derived. The core stresses, further denoted as , are given by the expression: Here, and are the elastic compliances of the core and shell, respectively; = ( 11 + 2 12 )/3; = ( 111 + 2 211 )/3; and = ( 111 + 2 112 + 2 123 + 4 122 ) 3 ⁄ are isotropic parts of the linear and nonlinear electrostriction tensors of the core, and is the Vegard strain in the shell. The electrostriction coupling can exist in the shell; however, it does not contribute to the solution (4a) for small , since the electric field is very small in the shell due to the high screening degree. The nondiagonal stresses are absent, 4 = 5 = 6 = 0. The corresponding free energy of the core-shell nanoparticle is: Since inequality 11 − 2 12 > 0 follows from the mandatory condition of a positive quadratic-form of the elastic energy and 3 − 3 > 0, the denominators in Eq.(4) are always positive. The expressions (4) are accurate enough to provide a first approximation for the description of the core in a "deep" ferroelectric phase, when the absolute value of the polarization-dependent anisotropic contribution to the total strain is much smaller than other contributions (see Appendix A2 for details). The approximation imposes definite conditions on poorly known (or unknown) anisotropic parts of the tensors and . In order to avoid complications, the tensors are regarded as isotropic. The condition ≥ 0 should be valid for the free energy stability at high 3 , and this condition is assumed hereinafter. If the term 3 2 is small and positive, Eq.(4a) becomes much simpler for two important cases: 1) when elastic compliances of the core and the shell are the same: = ≡ , and 2) for shells with ∆ ≪ . In these cases: After substitution of the solution (5) in Eq.(2) and elementary transformations (see Appendix A3 for details), the equation for polarization 3 with renormalized coefficients , , , , and is derived as: The renormalized coefficients are given by expressions: The renormalization of the coefficients in Eq.(6) is proportional to the ratio 3 − 3 3 3 , which is close to ∆ for thin shells with ∆ ≪ . 
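Equation (6) above is an algebraic equation of state for the polarization with renormalized Landau coefficients; at zero applied field, its nonzero root gives the spontaneous polarization. The snippet below is only a generic numerical sketch of that step in Python: the 2-4-6-8 coefficients used here are illustrative placeholders, not the renormalized BaTiO3 values derived in the paper.

```python
from scipy.optimize import brentq

# Illustrative 2-4-6-8 Landau coefficients (placeholder values, not the paper's Table AI).
a, b, c, d = -2.0e7, -1.0e8, 5.0e9, 1.0e10

def dF_dP(P):
    # Equation of state dF/dP = 0 at zero applied field:
    # 2aP + 4bP^3 + 6cP^5 + 8dP^7 = 0
    return 2*a*P + 4*b*P**3 + 6*c*P**5 + 8*d*P**7

# Bracket a nonzero root (the spontaneous polarization) and solve for it.
Ps = brentq(dF_dP, 1e-3, 2.0)
print(f"Spontaneous polarization ~ {Ps:.3f} C/m^2 (illustrative)")
```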
The renormalization is most significant for small nanoparticles with ΔR ~ R; it vanishes for ΔR → 0 and is absent for thin films, where the curvature disappears, R and Rs tend to infinity, and their finite difference ΔR becomes the film thickness. By definition, the tetragonality c/a is proportional to the ratio (1 + u3)/(1 + u1), where u3 and u1 are the core (denoted by the superscript c) strains written in Voigt notation. Because the core strains are small, |u3| ≪ 1 and |u1| ≪ 1, the tetragonality ratio is proportional to their difference, c/a ≈ 1 + u3 − u1. Since the tensors of nonlinear electrostriction coupling are assumed to be isotropic, their contributions to u3 and u1 are the same, namely u3 = u1 + Q11 P3² and u1 = u1 + Q12 P3², where the function u1 is given by Eq. (A.14a) in Appendix A2. Therefore, the deviation of the tetragonality ratio from unity is proportional to the anisotropy of the linear electrostriction coefficients of the core: c/a − 1 ≈ (Q11 − Q12) P3².

III. RESULTS AND DISCUSSION

Numerical results presented in this section are obtained and visualized in Mathematica 13.2 [43]. The spontaneous polarization calculated using the 2-4-6 LGD free energy, where the nonlinear electrostriction coefficients are set to zero, is shown in Fig. 2(a)-2(d). It appears at some critical temperature- and size-dependent strain and monotonically increases with an increase in the Vegard strain W up to 6%. The magnitude of the spontaneous polarization, which corresponds to T = 293 K and a maximal strain W of 6%, does not exceed 35 µC/cm² for R = 5 nm and 30 µC/cm² for R = 50 nm. For the case of small radii (5 and 10 nm in Figs. 2(a,b)), the magnitude of the polarization increases up to (80-85) µC/cm² at temperatures above 440 K (see the red region in the top right corner in the left column of Fig. 2), which is much higher than the bulk value at 293 K (26 µC/cm²). Such an increase is unphysical, because a temperature increase must weaken and eventually destroy the long-range order. The reason for the unphysically large polarization is that the 2-4-6 LGD free energy becomes unstable at temperatures above 440 K, and large tensile Vegard strains shift the instability to lower temperatures; therefore, these large magnitudes look artificial and disagree with available experimental data. This strongly suggests that calculating the spontaneous polarization using the 2-4-6 LGD free energy is not practical. The spontaneous polarization calculated using the 2-4-6-8 LGD free energy, where the nonlinear electrostriction coefficients are set to zero, is shown in Fig. 2(e)-2(h). The magnitude of the polarization, which corresponds to T = 293 K and a maximal W = 6%, does not exceed 30 µC/cm² for R = 5 nm and 25 µC/cm² for R = 50 nm. The polarization decreases with a temperature increase (e.g., it is smaller than 5 µC/cm² for 500 K, R = 25 nm, and W = 4%), which is physical. The polarization increases up to 30 µC/cm² as the temperature decreases to 100 K (see the red region in the bottom right corner in the right column of Fig. 2), which is reasonable because the lowering temperature supports long-range order. Thus, the 2-4-6-8 LGD free energy, being stable, can be used for a better description of BaTiO3 core-shell nanoparticles at arbitrary temperatures and high Vegard strains. However, the magnitude of the polarization does not exceed 35 µC/cm², which is too small in comparison with the experimentally observed giant values that exceed 120 µC/cm² [6,11]. Next, we study how the nonlinear electrostriction coupling affects these results. The core strains and the tetragonality ratio calculated using the 2-4-6 LGD free energy start to increase for temperatures above 400 K and, in particular, c/a becomes larger than 1.1 when the temperature increases above 440 K. Since c/a − 1 ≈ u3 − u1 ≈ (Q11 − Q12) P3² in accordance with Eq.
(7), the increase of u1, u3, and c/a for T > 400 K is a direct consequence of the spontaneous polarization increase for temperatures above 400 K (as shown in Fig. 3(a)-(d)). Since the unphysical increase of P3² happens for T > 400 K due to the inapplicability of the 2-4-6 LGD free energy at temperatures above 440 K, one cannot trust the increase of u1, u3, and c/a for T > 400 K, and thus we do not show the figure for the strains and tetragonality ratio calculated using the 2-4-6 LGD free energy in this work. The spontaneous polarization value of 50 µC/cm² corresponds to tensile core strains, u1 and u3, as high as (3-6)% (see Figs. 4(a)-4(d)) and tetragonality ratios as high as 1.02-1.04 (see Fig. 4(e)-4(f)), although this magnitude of the spontaneous polarization is nowhere near the large experimentally measured values. A spontaneous polarization larger than 50 µC/cm² can be reached by the application of higher tensile strains and/or high positive intrinsic surface stresses (note that compressive strains do not contribute to an increase in polarization); however, this would lead to unphysically high tetragonality ratios. In particular, a 27% strain difference (i.e., a tetragonality ratio as high as 1.27) would be required to match the experimentally observed polarization value of 130 µC/cm² [6], which seems physically impossible. This means that physical mechanisms other than the Vegard and/or mismatch strains and the linear and nonlinear electrostriction couplings, which are not considered here, are responsible for a polarization enhancement higher than 100 µC/cm². It should also be noted that the polarization of core-shell nanoparticles can depend critically on the preparation method: ball-milled nanoparticles reveal ≅130 µC/cm², while much smaller values are measured in non-milled nanoparticles [6,11]. The discrepancy of more than a factor of two between LGD models and experiments [6,11] may be explained by other factors not considered in this work. It could be the dependence of the linear electrostriction coefficients and elastic modulus on the preparation methods of the nanoparticles or post-fabrication techniques. Indeed, recent work reveals an extraordinarily high electrostriction due to interface effects [41]. As a rule, the influence of the interface is important near the surface (i.e., several nm from the surface), but the scale may be greater for milled nanoparticles of (5-10) nm radius, which have a quasi-cubic or irregular shape, because their evolved surfaces and corners contribute to the formation of inhomogeneous internal strains. Note that the elastic mismatch at the core-shell interface and internal strains can also influence the penetration depth of the surface-induced electrostriction: as a rule, the stronger the mismatch and/or strains, the greater the depth can be. Another possibility may be a negative extrapolation length in the boundary conditions for polarization, which would support a surface-induced polarization enhancement. In this work, we imposed natural boundary conditions, which correspond to an infinite extrapolation length. However, the effect of the extrapolation length can be very "short-range" [44], meaning that the polarization enhancement induced by a negative extrapolation length would be significant only in a thin sub-surface layer, as thin as (3-10) lattice constants.
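As a rough numerical cross-check of the relation c/a − 1 ≈ (Q11 − Q12)P3² used above, one can insert commonly quoted literature values of the BaTiO3 electrostriction coefficients; the particular numbers below are our assumption for illustration, not the parameters of the paper's Table AI.

```python
# Rough consistency check of c/a - 1 ~ (Q11 - Q12) * P^2 for BaTiO3.
Q11, Q12 = 0.11, -0.043       # m^4/C^2, commonly quoted literature values (assumed here)
for P in (0.26, 0.50):        # C/m^2, i.e. 26 and 50 uC/cm^2
    print(f"P = {P*100:.0f} uC/cm^2 -> c/a ~ {1 + (Q11 - Q12) * P**2:.3f}")
```

This lands in the same range as the ratios quoted in the text, roughly 1.01 for the bulk polarization and about 1.04 for 50 µC/cm².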
Below we show that, due to the polarization enhancement, the Vegard strains can improve the electrocaloric properties of a core-shell ferroelectric nanoparticle, which can be important for applications such as nanocoolers. The relative electrocaloric (EC) temperature change, ΔT_EC = T − T_a, can be calculated from the expression given in Eq. (8a) [45], where ρ is the volume density and Cp is the specific heat of the nanoparticle core; T_a is the ambient temperature and T is the temperature of the ferroelectric core measured in adiabatic conditions; E1 and E2 are the values of the quasi-static electric field E3 applied to the nanoparticle in adiabatic conditions; and the remaining coefficients are the renormalized Landau coefficients introduced in Eqs. (6). Note that we are especially interested in reaching a maximal negative ΔT_EC < 0, corresponding to EC cooling of the nanoparticle, which is required for nanocooler-based applications. To reach the maximal ΔT_EC < 0, it is necessary to set E1 = 0 and E2 equal to the coercive field of the core-shell nanoparticle in Eq. (8a). The specific heat depends on polarization for ferroelectrics and can be modeled following Ref. [46], where Cp0 is the polarization-independent part of the specific heat and g is the density of the LGD free energy (A.2). According to experimental results, the specific heat usually has a maximum at the point (i.e., the coordinates of temperature and radius/strain) of the first-order ferroelectric phase transition, and the maximum height is about (10-30)% of the Cp value near TC (see, e.g., Ref. [47]). For estimates of the EC temperature change, we assume that the mass density and heat capacity of BaTiO3 are ρ = 6.02 × 10³ kg/m³ and Cp = 4.6 × 10² J/(kg K), respectively. The calculations of the EC response are performed over a wide temperature range, (250-400) K, and for a range of core radii, 5 nm < R < 50 nm, for which the Vegard strains are pronounced. The nanoparticle core must be in the ferroelectric state to produce any noticeable EC cooling (e.g., ΔT_EC < −2 K); when the core is in the paraelectric state, only a weak EC heating (e.g., 0 < ΔT_EC < 2 K) is possible [45]. The spontaneous polarization and the electrocaloric temperature change ΔT_EC, as functions of the core radius R and Vegard strain W, are shown in Fig. 5. The change in the sign of the EC effect with an increase in the spontaneous polarization is related to the features of the LGD free energy of BaTiO3, where not only the coefficient of the 2-nd power of polarization depends on temperature, but also the coefficients of the 4-th and 6-th powers. According to Eq. (8a), this leads to several contributions to the electrocaloric effect, which are proportional to the corresponding polarization powers. Since the relevant coefficient is negative, the EC effect changes sign for large values of the polarization.

FIGURE 5. Maps at … K (a, d), 338 K (b, e), and 388 K (c, f); the color scale corresponds to the polarization in µC/cm² and ΔT_EC in K. Other parameters are the same as in Fig. 2.

IV. CONCLUSIONS

This work provides a systematic analytical description of the polar properties of core-shell BaTiO3 nanoparticles using the 2-4-6-8 Landau-Ginzburg-Devonshire free energy functional, which considers nonlinear electrostriction coupling and large Vegard strains in the shell. We revealed that a spontaneous polarization as high as 50 µC/cm² can be stable in a BaTiO3 core with radius 5-50 nm at room temperature, if a 5-nm paraelectric shell is stretched by (3-6)% due to the Vegard strains.
We can conclude that the nonlinear electrostriction coupling in the core and tensile Vegard strains in the shell are key physical factors of the spontaneous polarization enhancement. The polarization of 50 µC/cm² corresponds to tetragonality ratios as high as (1.02-1.04). The application of higher strains and/or surface stresses would lead to unphysically high tetragonality ratios. In particular, the experimentally observed polarization of 130 µC/cm² [6] corresponds to tetragonality ratios as high as 1.27. The value of ~50 µC/cm² is less than half of what is measured for the spontaneous polarization at room temperature (~130 µC/cm²) for ball-milled core-shell BaTiO3 nanoparticles. A discrepancy of more than a factor of two between the considered 2-4-6 and 2-4-6-8 LGD models and experiments [6,11] may be explained by several factors not considered in this work. One is the dependence of the anisotropic linear electrostriction coefficients on the preparation route of the nanoparticles; these coefficients can be extraordinarily high due to interface effects [41] and can reach giant values for milled nanoparticles of (5-10) nm radius and quasi-cubic shape, because their evolved surfaces and corners contribute to the formation of inhomogeneous internal strains. Another possibility is a negative extrapolation length in the boundary conditions for polarization, which would support a surface-induced polarization enhancement, although the extrapolation length effect is short-range [44]. The Vegard strains can improve the electrocaloric properties of a core-shell ferroelectric nanoparticle due to the strain-induced polarization enhancement. In particular, a tensile Vegard strain of (3-6)% increases the spontaneous polarization up to 50 µC/cm², and the enhanced spontaneous polarization in turn increases the EC cooling to as much as 6 K (in comparison with unstrained bulk BaTiO3, where the change in the EC temperature cannot exceed 2.5 K). Thus, dense nanocomposites containing core-shell BaTiO3 nanoparticles can be important for applications such as nanocoolers.

A1. Electric field and LGD free energy of the core-shell nanoparticle

We consider a ferroelectric nanoparticle core of radius R with a three-component ferroelectric polarization vector P. The core is regarded as insulating, without any free charges. It is covered with a semiconducting paraelectric shell of thickness ΔR that is characterized by an isotropic relative dielectric permittivity tensor ε_ij = ε_s δ_ij. The core-shell nanoparticle is placed in a dielectric medium (polymer, gas, liquid, air, or vacuum) with an effective dielectric permittivity ε_e. The core-shell geometry is shown in Fig. 1 of the main text. Since the ferroelectric polarization contains background and soft-mode contributions, the electric displacement vector has the form D = ε₀ε_b E + P inside the core. In this expression, ε_b is the relative background permittivity of the core [48], ε₀ is the universal dielectric constant, and P is the ferroelectric polarization containing the spontaneous and field-induced contributions. As a rule, 4 < ε_b < 10; the LGD free energy density of the core-shell nanoparticle is given by Eqs. (A.2a)-(A.2g). The coefficient of the second-order polarization term depends linearly on temperature T, where α_T is the inverse Curie-Weiss constant and TC(R) is the ferroelectric Curie temperature renormalized by electrostriction and surface tension as given in Refs. [49,50]; here TC is the Curie temperature of a bulk ferroelectric and Qσ = Q11 + 2Q12 is the sum of the electrostriction tensor diagonal components, which is positive for most ferroelectric perovskites with cubic m3m symmetry in the paraelectric phase, namely 0.004 m⁴/C² < Qσ < 0.04 m⁴/C².
Tensor components , , and are listed in Table AI. The gradient coefficients tensor are positively defined and regarded as temperature-independent. The following designations are used in Eq.(A.2e): is the stress tensor, is the elastic compliances tensor, , , and are the linear and two nonlinear electrostriction tensors, whose values and/or ranges are listed in Table AI. Since  is relatively small, not more than (1 -4) N/m for most perovskites, and to focus on the influence of linear and nonlinear electrostriction effects, we do not consider the surface tension and flexoelectric coupling in this work and set = 0 and = 0. Allowing for the Khalatnikov mechanism of polarization relaxation [52], minimization of the free energy (A.2) with respect to polarization leads to three coupled time-dependent Euler-Lagrange equations for polarization components inside the core, where the subscript = 1, 2, 3, and Γ and is the temperature-dependent Khalatnikov coefficient [53]. The boundary condition for polarization at the core-shell interface = is: * The values of for the crystalline shell are smaller than the for the crystalline core, which is elastically harder than the BaTiO3 core. An amorphous shell, which is softer than the BaTiO3 core, would correspond to a value of larger than the . [41], we can assume that the possible range of variation can be even wider in the coreshell BaTiO3 nanoparticles. Indeed, the linear and nonlinear electrostriction coefficients, and , should depend on the preparation way of the nanoparticles and their chemical purity. These speculations give us some grounds to vary within the range from -1.5 m 8 /C 4 to +1.5 m 8 /C 4 looking for the optimal values, which correspond to the highest spontaneous polarization and the best related properties. A2. The core stress induced by the Vegard strains in the shell During the ball-milling the mechanochemical reaction at BaTiO3 nanoparticle surface results in the formation of core-shell nanoparticle. Due to the strong Vegard strains in the shell the elastic mismatch between the core and shell lattices appears and results in the core stress. Below we calculate the stress induced by the Vegard strains in the paraelectric core. We also note that the ball milling makes the particle surface rough, however for the sake of simplicity we assume the particle to be spherical. In order to find the elastic fields analytically, we use a perturbation approach. At first let us consider an isotropic elastic problem, which has a spherical symmetry, being also consistent with the cubic symmetry of the paraelectric core-shell nanoparticle placed in a soft matter matrix. The elastic displacement in a spherical coordinate frame is given by expression, = { ( ), ( ), ( )}, where = = 0 for a spherically-symmetric case. In this case, the displacement vector satisfies the equation [67] grad( ) ≡ 1 2 ( 2 ) = 0. (A.9) From Eq.(A.9), ( + 2 ) = 1 , and therefore the general solution of the equation (A.9) in the particle core ("c") and shell ("s") is: The strain tensor components are , = , and , = , = , , and their explicit form is Substituting the solution (A.10) into Hooke's law relating the elastic stress and strain tensors, , and , , we obtain the following expressions for radial stresses: that the electric field and polarization in the core are homogeneous and directed along the polar axis X3. However, an inhomogeneous stray electric field can exist in the shell, and therefore we consider the total polarization, 2 = 1 2 + 2 2 + 3 2 , of the shell. 
Since the screening length  of the shell is small (less than 1 nm) we can neglect the stray field, and thus omit the electrostriction term,
THE UTILIZATION OF WEATHER RESEARCH FORECASTING (WRF) MODEL OF 3DVAR (THREE DIMENSIONAL VARIATIONAL) AND HIMAWARI-8 SATELLITE IMAGERY TO THE HEAVY RAIN IN PALANGKARAYA (CASE STUDY: APRIL 27, 2018) On April 27, 2018, heavy rain occurred in Palangkaraya. Based on surface observation data at Tjilik Riwut Meteorological Station, the peak of the rain occurred between 18 and 21 UTC, with 54 mm falling within 3 hours. As a result, floods inundated the area on the following day. This research aimed to discover the cause of the heavy rain using the WRF model with the 3DVar technique, assimilating AMSU-A satellite data with the tropical physics suite parameterization scheme, together with Himawari-8 satellite IR-1 data processed with Python programming. Based on the results, the WRF 3DVar model is not representative enough in terms of total rainfall. However, several weather disturbances indicating the potential for severe weather can be seen in the WRF 3DVar output. These are indicated by the shear line and eddy circulation at 18 and 21 UTC, and by the air pressure time series, which decreases with a 0.5 mb tendency between 15 and 18 UTC. Moreover, the cloud top temperature graph from Himawari-8 satellite data shows a drastic reduction in temperature to -61.4323 °C at 18.20 UTC, which supports the heavy rain process. The weather analysis shows that WRF 3DVar is not representative enough for the total rainfall result, but is appropriate for other weather aspects (shear line, eddy, and air pressure). Therefore, the heavy rain was caused by the shear line and eddy conditions, the air pressure, and the low cloud top temperature.

Introduction

The ability of a model to predict weather conditions depends not only on the resolution of the model and the accuracy of its physical and dynamical processes, but also on the initial conditions [1]. Therefore, a data assimilation scheme is needed and valuable for incorporating observation data into the numerical calculation. This scheme can be used to update the initial conditions for computing WRF [2]. The variational approach that allows all available observations to be considered simultaneously is called the 3DVar method. This 3DVar implementation is helpful for improving the performance of the WRF (Weather Research and Forecasting) model. The 3DVar system is applied to multiresolution domain forecasting systems [3]. An example of research using the WRF 3DVar method is Paski (2017) [8]. Therefore, this research uses WRF 3DVar to model the heavy rain event. This research was conducted in Palangkaraya for April 27, 2018, when heavy rain caused a flood. The heavy rain soaked dozens of residents' houses in several villages in Palangkaraya. The heavy rain occurred between 18 and 21 UTC based on observation data at Tjilik Riwut Meteorological Station in Palangkaraya, amounting to 54 mm in 3 hours. According to BMKG, the criterion for heavy rain is 10-20 mm/hour or 50-100 mm/day, so this event can be categorized as heavy rainfall. Research utilizing WRF-ARW in Palangkaraya was carried out by Swastiko and Rifani (2016) [9], which used 20 parameterization schemes. Therefore, to improve the heavy rain analysis, this research utilizes the WRF 3DVar technique, assimilating AMSU-A satellite data with the tropical physics suite parameterization scheme, a combination that has never been applied before in Palangkaraya.
In addition to analyzing the causes of the heavy rain, the rainfall results from the WRF-ARW and WRF 3DVar outputs will be compared. Moreover, IR-1 data from the Himawari-8 satellite are used to determine the cloud top temperature when the rain occurred, processed using Python programming. Data. (1) Final Analysis (FNL) data. FNL data are the model input data for WRF-ARW and are available at https://rda.ucar.edu; 24 hours of FNL data with 0.25° × 0.25° resolution were used. (2) Global Data Assimilation System (GDAS) data for the AMSU-A satellite. The data to be assimilated are AMSU-A satellite data, which can be obtained from www.rda.ucar.edu/datasets/ds735.0/ [10]. (3) Infrared channel Himawari-8 satellite data. The data are available at ftp://202.90.199. (4) Rainfall data from AWS Digi Palangkaraya Meteorological Station. The AWS data are obtained from the BMKG AWS center. The rainfall data are used to verify the one-point rainfall results of the WRF 3DVar model output. Method. The method in this research is to verify the WRF-ARW and WRF 3DVar rainfall results against rainfall from AWS Digi Palangkaraya Meteorological Station. In addition, the output of the WRF 3DVar model is reviewed and analyzed descriptively. The output is processed with GrADS (The Grid Analysis and Display System). The domain and spatial resolution used in this research are based on Swastiko and Rifani (2016) [9], as used in previous research in Palangkaraya. Meanwhile, the IR-1 channel of Himawari-8 satellite data is processed with Python programming to obtain the cloud top temperature on April 27, 2018.

Result and Discussion

Rainfall and its verification. Based on the output of the WRF-ARW and WRF 3DVar models, which used the tropical physics suite parameterization scheme, the models were not responsive enough to capture the heavy rain. This is indicated by the large difference in rainfall between the WRF-ARW model, the WRF 3DVar model, and the rainfall data from AWS Digi Palangkaraya Meteorological Station. Therefore, for the heavy rain event of April 27, 2018, neither the WRF-ARW model nor the WRF 3DVar model was representative in capturing the heavy rain. Moreover, the tropical physics suite parameterization scheme is not suitable for detecting the heavy rain in Palangkaraya. The same thing also happened in previous research [11], where the tropical physics suite parameterization scheme was not able to capture heavy rain events; this scheme therefore has mismatches in some areas when detecting heavy rain. Wind Pattern Analysis. The streamline output from the WRF 3DVar model shows significant wind patterns at 18 and 21 UTC. At 18 UTC, the wind forms a strong and tight convergence with a shear line over Palangkaraya, increasing cloud accumulation, which caused heavy rain. Meanwhile, there is an eddy circulation at 21 UTC, which strongly contributed to the heavy rain that caused the flood. Additionally, the strong convergence, tight shear line, and eddy circulation coincided over Palangkaraya. Therefore, the wind pattern output of the WRF 3DVar model with the tropical physics suite parameterization scheme can illustrate the cause of the heavy rain in Palangkaraya. Based on the surface air pressure output of the WRF 3DVar model, the air pressure decreases with a tendency of 0.5 mb between 15 UTC and 18 UTC (a 3-hour air pressure tendency).
At 18.00 UTC, the heavy rain that caused the flood occurred. The air pressure reduction contributed to the heavy rain in Palangkaraya, because it indicates a probability of bad weather. Therefore, air pressure is one of the indicators of bad weather occurrence [12].

Conclusion

Based on the WRF 3DVar model with the tropical physics suite parameterization, the model is not representative of the heavy rain event. There is a significant difference between the rainfall data from the AWS and the models. In addition, the wind pattern from WRF 3DVar shows a disturbance in the form of convergence and a sharp shear line, which triggered cloud accumulation at 18 UTC. In addition, eddy circulation contributed to the bad weather in Palangkaraya at 21 UTC. Moreover, the surface air pressure from the WRF 3DVar output shows a decrease of around 0.5 mb, which contributed to the heavy rain in Palangkaraya. Subsequently, based on the infrared (IR-1) channel of Himawari-8 satellite data, the cloud top temperature reaches -61.4323 °C at 18.20 UTC. That temperature shows that the Cb clouds were in a mature phase and ready to produce heavy rainfall in Palangkaraya. Low cloud top temperatures persisted until 23.40 UTC. Therefore, the wind pattern, surface air pressure, and cloud top temperature had an essential effect on the heavy rain event in Palangkaraya.

Suggestion

In further research, it will be necessary to use WRF 4DVar, other parameterization schemes, and comparisons with other data such as GSMaP to improve the rainfall analysis.
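For completeness, the cloud-top temperature series cited above was obtained by processing the Himawari-8 IR-1 data with Python. A minimal sketch of such an extraction is given below; it assumes the imagery has already been converted to brightness temperature and stored as NetCDF, and every file name, variable name, and coordinate value is an illustrative placeholder rather than the actual BMKG product.

```python
import xarray as xr

# Hypothetical NetCDF file of Himawari-8 IR-1 brightness temperature (placeholder names).
ds = xr.open_dataset("himawari8_ir1_20180427.nc")
tbb = ds["tbb"]  # assumed variable: brightness temperature in Kelvin, dims (time, lat, lon)

# Approximate coordinates of Palangkaraya (illustrative values).
lat0, lon0 = -2.21, 113.92

# Nearest-pixel time series of cloud-top temperature, converted to degrees Celsius.
series = tbb.sel(lat=lat0, lon=lon0, method="nearest") - 273.15
print(series.to_pandas())
print(f"Coldest cloud top: {float(series.min()):.1f} degC at {series.idxmin('time').values}")
```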
The AF-1-deficient estrogen receptor ERα46 isoform is frequently expressed in human breast tumors Background To date, all studies conducted on breast cancer diagnosis have focused on the expression of the full-length 66-kDa estrogen receptor alpha (ERα66). However, much less attention has been paid to a shorter 46-kDa isoform (ERα46), devoid of the N-terminal region containing the transactivation function AF-1. Here, we investigated the expression levels of ERα46 in breast tumors in relation to tumor grade and size, and examined the mechanism of its generation and its specificities of coregulatory binding and its functional activities. Methods Using approaches combining immunohistochemistry, Western blotting, and proteomics, antibodies allowing ERα46 detection were identified and the expression levels of ERα46 were quantified in 116 ERα-positive human breast tumors. ERα46 expression upon cellular stress was studied, and coregulator bindings, transcriptional, and proliferative response were determined to both ERα isoforms. Results ERα46 was expressed in over 70% of breast tumors at variable levels which sometimes were more abundant than ERα66, especially in differentiated, lower-grade, and smaller-sized tumors. We also found that ERα46 can be generated via internal ribosome entry site-mediated translation in the context of endoplasmic reticulum stress. The binding affinities of both unliganded and fully-activated receptors towards co-regulator peptides revealed that the respective potencies of ERα46 and ERα66 differ significantly, contributing to the differential transcriptional activity of target genes to 17β estradiol (E2). Finally, increasing amounts of ERα46 decrease the proliferation rate of MCF7 tumor cells in response to E2. Conclusions We found that, besides the full-length ERα66, the overlooked ERα46 isoform is also expressed in a majority of breast tumors. This finding highlights the importance of the choice of antibodies used for the diagnosis of breast cancer, which are able or not to detect the ERα46 isoform. In addition, since the function of both ERα isoforms differs, this work underlines the need to develop new technologies in order to discriminate ERα66 and ERα46 expression in breast cancer diagnosis which could have potential clinical relevance. Electronic supplementary material The online version of this article (doi:10.1186/s13058-016-0780-7) contains supplementary material, which is available to authorized users. Background Breast cancer is a major public health concern because its incidence continues to rise. It is the second most common cancer overall and by far the most frequent cancer among women [1]. The etiology of breast cancer is multifactorial, and although the mechanisms of carcinogenesis remain poorly defined the role of hormones is recognized as a major risk factor in breast cancer development, in particular 17β estradiol (E2) and its derivatives. Estrogen receptor (ER)α is one of two ERs and is involved in several key aspects of breast cancer diagnosis [2]. Firstly, ERα protein immunoreactivity in the nucleus of mammary epithelial cells is systematically evaluated and quantified during anatomopathological diagnosis, with 70% of breast cancers initially described as ERα-positive [2]. Secondly, ERα expression in breast cancers correlates with improved survival rates and reduced risk of recurrence and metastases [3][4][5]. 
Finally, the blockade of ERα activity represents a major targeted therapy for ERα-positive breast cancer, with tamoxifen and aromatase inhibitors having already benefitted millions of women [6]. Despite the success of these treatments, 30 to 40% of patients develop resistance [7]. This highlights the need for further in-depth characterization of ERα-positive tumors and a full understanding of the mechanisms underlying the disease in order to propose new therapeutic approaches. In addition to the "classic" full-length 66-kDa ERα (ERα66) which harbors the two activation functions, AF-1 and AF-2, two other isoforms of 46 kDa (ERα46) and 36 kDa (ERα36) have been characterized. ERα36 differs from ERα66 by lacking both transcriptional activation domains (AF-1 and AF-2) and encoding a unique 29 amino acid sequence [8]. In contrast, ERα46 only lacks the first 173 Nterminal amino acids which harbors AF-1 and is thus completely identical to the amino acids 174 to 595 of ERα66 (Fig. 1a). ERα46 has been reported to be expressed in various cell types such as human osteoblasts [9], macrophages [10], and vascular endothelial cells [11], but also in cancer cells such as colorectal tumor tissues [12] and tamoxifenresistant breast cancer cell lines [13]. Mechanisms regulating both the expression of ERα46 and its functions remain essentially unknown. It can be generated by either alternative splicing [14], proteolysis [15], or an alternative initiation of translation via an internal ribosome entry site (IRES) [16]. This latter mechanism generates two different proteins from a single RNA. A few studies have suggested that ERα46 plays an inhibitory role in the growth of cancer cell lines, suggesting that ERα46 could affect tumor progression. The overexpression of ERα46 in proliferating MCF7 cells provoked cell cycle arrest in G0/G1 phase and inhibited ERα66-mediated estrogenic induction of the AF-1-sensitive reporters c-fos and cyclin D1, as well as estrogen-responsive element (ERE)-driven reporters [14,17]. It was also shown that ERα46 inhibits growth and induces apoptosis in human HT-29 colon adenocarcinoma cells [12]. This inhibition likely occurs through competition between ERα66 and ERα46 homodimers and heterodimers for binding to the ERE [17]. The role of the AF-1-deficient ERα46 isoform has also been questioned in vivo using mice deficient in the ERα A/B domain (named ERαAF-1 0 ), which express only a short 49-kDa isoform that is functionally similar to ERα46. These ERαAF-1 0 mice revealed a complete infertility phenotype [18] that was associated with an altered proliferative effect of E2 on the uterine epithelium and a loss of its transcriptional response in this tissue [19]. Thus, the roles and functions of this ERα46 isoform appear to be different from those of full-length ERα66. The expression level of this truncated isoform in human breast tumors remains unknown, even though the expression of a 47-kDa isoform of ERα in human breast cancers was reported more than two decades ago [20]. Currently, several antibodies are used for immunohistochemical detection of ERα in human breast tumor diagnosis but most of them have not yet been thoroughly characterized in terms of ERα46 recognition. In this study, we first characterized the various antibodies commonly employed in immunohistochemical diagnosis for their ability to detect ERα46. We then analyzed the relative expression of the ERα isoforms in a panel of 116 ERα-positive breast tumors. 
We also examined the mechanism of ERα46 generation and its specificities in term of coregulator binding and of functional activities. Immunohistochemistry Cells were formalin-fixed and paraffin-embedded using the Shandon™ Cytoblock™ Cell Block Preparation System, according to the manufacturer's protocol. Immunohistochemistry was performed with a Dako Autostainer Link 48 on 3-μm sections. Antigen retrieval was performed using a Dako PT Link pressure cooker in pH 6.0 citrate buffer. An EnVision™ system was used for antibody detection. The anti-Flag (M2; Sigma-Aldrich) and a panel of anti-ERα antibodies (SP1 (Abcam), HC20 (SantaCruz), 6 F11 (Novocastra), 1D5 and EP1 (Dako)) were used. Fig. 1 Recognition of estrogen receptor alpha (ERα) isoforms by antibodies used for human breast cancer diagnosis. a Schematic representation of the ERα66, ERα46, and ERα36 isoforms. The location of the known epitopes used for the generation of antibodies is indicated. b Representative Western blot analyses with the SP1 antibody on extracts from MDA-MB-231 cells transfected with plasmids encoding either ERα46 (MDA-46 kDa) or ERα66 (MDA-66 kDa) and from MCF7 cells which express both isoforms, or c with the Anti-Flag antibody on extracts from MDA-MB-231 cells transfected with plasmids encoding the ERα36 isoform. d The different antibodies used in breast cancer diagnosis (1D5, 6 F11, SP1, and EP1) were tested for their ability to recognize either ERα66, ERα46, or both isoforms in immunocytochemistry experiments performed in MCF7, MDA-ERα46, and MDA-ERα66 or MDA-ERα36 cells. e Representative picture of Western blot experiments evaluating the expression of both ERα isoforms in MCF-7 cells, as determined by the different antibodies indicated. Protein extracts prepared from MDA cells were used as an ERα-negative control. AF activation function, DBD DNA-binding domain, LBD ligand-binding domain Human breast cancer sample collection The retrospective study used tumors samples from patients diagnosed with invasive breast carcinoma, established as being ERα-positive on a previously performed biopsy (see Additional file 1 ( Figure S1) for clinical parameters of the patients used). The diagnosis was performed with the 6 F11 or 1D5 antibodies between 2011 and 2014. Tumors were frozen in 1.5-ml cryotubes using the Snap-FrostII™ fast freezing system (Excilone, France) and stored at -80°C. Patients with an ipsilateral recurrence of breast cancer who had received neoadjuvant chemotherapy or chemotherapy treatment for another disease or who received thoracic radiation therapy (recurrence or another pathology) were excluded. All tumors were classified by the anatomopathologist (human epidermal growth factor receptor-2 (HER2) status, tumor size, ERα overexpression, lymph node involvement, histological type) and were graded according to Elstone and Ellis' guidelines [23]. The analysis was performed on a series of 116 ERα-positive invasive ductal or lobular breast carcinomas (22 grade I, 60 grade II, and 34 grade III). Immunoprecipitation and proteomic analysis ERα-enriched protein fractions from tumor protein extracts were obtained through immunoprecipitation using the antihuman ERα primary HC20 antibody. Following their purification using Protein G sepharose beads, a first Western blot was performed to check the efficiency of the immunoprecipitation. In parallel, the immunoprecipitate was diluted with Laemmli buffer, then separated by SDS-PAGE using a short and low-voltage electrophoretic migration. 
After Instant Blue staining, the bands corresponding to ERα46 and ERα66 were respectively excised from the gel. Proteins were in-gel digested by trypsin, and resulting peptides were extracted from the gel and analyzed by nano-liquid chromatography coupled to tandem mass spectrometry (LC-MS/ MS) using an ultimate 3000 system (Dionex, Amsterdam, Netherlands) coupled to an LTQ-Orbitrap Velos mass spectrometer (Thermo Scientific, Bremen, Germany). The LTQ-Orbitrap Velos was operated in data-dependent acquisition mode with the Xcalibur software. Survey scan MS spectra were acquired in the Orbitrap in the 300-2000 m/z range with the resolution set to a value of 60,000. The twenty most intense ions per survey scan were selected for collision-induced dissociation fragmentation, and the resulting fragments were analyzed in the linear ion trap (LTQ, parallel mode, target value 1e4). Database searches from the MS/MS data were performed using the Mascot Daemon software (version 2.3.2, Matrix Science, London, UK). The following parameters were set for creation of the peak lists: parent ions in the mass range 400-4500, no grouping of MS/MS scans, and threshold at 1000. Data were searched against SwissProt 20130407. Mascot results were parsed with the in-house developed software MFPaQ version 4.0 (Mascot File Parsing and Quantification) (http://mfpaq.sourceforge.net/) and protein hits were automatically validated with a false discovery rate (FDR) of 1% on proteins and 5% on peptides (minimum peptide length of six amino acids). Plasmids, lentiviral production, and luciferase assay cDNA coding for the A/B (amino acids 2-173) domain of the human ESR1 gene encoding ERα was amplified by polymerase chain reaction (PCR) and cloned into the SpeI and NcoI sites of the pTRIP CRF1AL2 bi-cistronic vector that encodes both the Renilla luciferase (LucR) and Firefly luciferase (LucF2CP) genes separated by this putative IRES-ERα sequence [24]. The final construct was verified by sequencing. In such a transgene, LucR expression is cap-dependent whereas LucF expression is IRES-dependent; thus, the level of IRES activity can be deduced from the LucF/LucR ratio. The production of lentiviral particles was performed in HEK293 cells. Transduced MDA-MB 231 cells (MDA-A/B) were subjected to ER stress as indicated. To test whether the stress-induced increase in LucF activity was not due to the generation of mono-cistronic LucF transcripts via an internal promoter or cryptic splicing, MDA-Lenti-AB (1/10) cells were exposed to two siRNAs-lucR and treated with 5 mM DTT or 100 nM thapsigargin. As control, cells were treated with scrambled siRNA. After a PBS wash, cells were frozen at -80°C. Luciferase measurements were performed with a LB960 luminometer (Berthold) using the dual reporter assay kit (E1960; Promega) according to the manufacturer's recommendations. RT-qPCR MDA-MB231, MDA-ERα46, and MDA-ERα66 cells were plated in 9-cm diameter dishes in DMEM/0.5% charcoalstripped FCS (csFCS) containing appropriate antibiotics in order to reach confluency 3 days later. Cells were then treated with 10 -8 M E2 final for 4 h or with a similar volume of ethanol (vehicle). Total RNAs were then purified using the Trizol™ reagent (Life Technologies, Inc.) according to the manufacturer's instructions. RNA (2 μg) was used as a template for reverse transcription (RT) by the M-MLV reverse transcriptase (Invitrogen) and Pd(N)6 random hexamers (Amersham Pharmacia Biosciences). 
Quantitative PCR was performed on 2 μl of 1/10th diluted RT reactions with 1 μM of specific oligonucleotides on BioRad CFX96 machines using BioRad iQ SYBR Green supermix with 50 rounds of amplification followed by determination of melting curves. Primers for RT-PCR were designed with the QuantPrime design tool (http://www.quantprime.de [25]). Independent triplicate experiments were conducted twice, and all values were normalized to Rplp0 mRNA. Significant variations were evaluated using the GraphPad Prism™ software. Coregulator-peptide interaction profiling Ligand-mediated modulation of the interactions between the ERα46 and ERα66 proteins and their coregulators was characterized by MARCoNI (Microarray Assay for Real-time Coregulator-Nuclear receptor Interaction; PamGene International BV, the Netherlands). This method has been described previously [26,27]. Briefly, each array was incubated with a reaction mixture of crude lysates from MDA-MB-231 cells stably expressing each isoform, ERα46 or ERα66, in buffer F (PV4547; all Invitrogen) and vehicle (2% DMSO in water) with or without the receptor ligands at the indicated concentrations. ERα66 was quantified by enzyme-linked immunosorbent assay (ELISA; Active Motif, USA) and ERα46 was normalized to ERα66 by Western blot analyses. The SP1 antibody, which recognizes both isoforms, was used to detect the ERα bound on the PamChip microarray. For both ERα46 and ERα66 receptors, a dose-response curve was performed from 10^-12 to 10^-7 M E2 to directly compare their response to E2. For measurements of antagonist effects with 4-hydroxytamoxifen and fulvestrant, 6.3 nM (10^-8.2 M) E2 was applied since both receptors were fully active at that concentration. Incubation was performed at 20°C in a PamStation96 (PamGene International). Receptor binding to each peptide on the array was detected by the SP1 antibody. The secondary anti-rabbit antibody conjugated to fluorescein and the goat anti-mouse antibody conjugated to fluorescein were used, giving a fluorescent signal that was further quantified by analysis of .tiff images using BioNavigator software (PamGene International). Statistical analyses Comparisons between groups were performed using the Mann-Whitney rank sum test for continuous variables. Correlations between continuous variables were evaluated using Spearman's rank correlation test. All P values are two-sided. For all statistical tests, differences were considered significant at the 5% level. Statistical analyses were performed using the STATA 13.0 software (STATA Corp, College Station, TX) or GraphPad Prism v.5. Characterization of the anti-ERα antibodies commonly used for breast tumor diagnosis Apart from lacking the A/B domain and thus the AF-1 transactivation function, the ERα46 isoform is completely identical to the ERα66 isoform (Fig. 1a). Therefore, to characterize the expression of ERα46 in breast tumors, an antibody must be used that is directed against the C-terminal domain. This excludes 1D5, one of the first monoclonal antibodies to be available against ERα for tumor diagnosis [28], which targets an epitope in the A/B domain (Fig. 1a). Later on, the murine and rabbit monoclonal antibodies 6 F11 and SP1, respectively, with improved specificities compared to the 1D5 clone, became extensively used for diagnosis [29,30]. More recently, the monoclonal rabbit antibody EP1 was commercialized. 
However, whereas the SP1 epitope is known to be in the C-terminal domain, the abilities of the 6 F11 and EP1 antibodies to recognize ERα46 have not, to our knowledge, been reported. To test this, we used control ERα-negative MDA-MB-231 cells and MDA cells engineered to stably express either the ERα46 or ERα66 isoform or to transiently express the Flagtagged ERα36 protein, alongside MCF7 cells co-expressing both proteins ( Fig. 1b and c, and Additional file 1: Figure S2). Interestingly, a small amount of ERα46 expression was found in MDA 66-kDa cells, presumably due to an alternative initiation of translation at Met 174 and/or Met 176 as previously suggested [16]. Immunocytochemistry performed on these five cell lines demonstrated that, among the four tested antibodies (1D5, 6 F11, SP1, and EP1), only SP1 was able to specifically detect the ERα46 isoform in the MDA 46-kDa cells (Fig. 1e). Of note, none of these antibodies was able to recognize the ERα36 isoform by immunocytochemistry. The immunoreactivity of the different antibodies was also tested by immunoblotting with the HC-20 antibody, which is frequently used in Western blot analyses, but not for diagnosis since it is a polyclonal rabbit antibody (Fig. 1e). Whereas EP1 and 6 F11 only detected ERα66, the HC-20 and SP1 antibodies recognized both ERα46 and ERα66, which is well in line with the immunocytochemistry results. We also noticed that the 1D5 antibody had a quasi-undetectable reactivity when used in this procedure. Altogether, these data demonstrate that from the set of antibodies commonly used for breast cancer diagnosis, the SP1 antibody is the only one able to recognize the ERα46a isoform by immunohistochemistry. Quantification of ERα46 in human breast carcinomas Using SP1 antibody, we next performed Western blotting of 116 ERα-positive breast tumor samples (initially characterized with the 6 F11 or 1D5 antibodies) to compare the relative abundance of ERα46 and ERα66. Patients included in this study had not have received any neoadjuvant therapy. Most of the breast tumors (70%) expressed both isoforms, though at varying levels ( Fig. 2a and b). The ERα46/ERα66 ratio varied from 0 to 3.48, with a mean average of 0.37 and a median of 0.16. Furthermore, even though the vast majority of tumors expressed lower levels of ERα46 than ERα66, 10% of the tumors tested expressed predominantly the shorter isoform (Fig. 2c). We next analyzed the relationship between clinical parameters (grade and size of tumor) and ERα46 expression. We found that high-grade tumors correlated with lower ERα46 expression since 91% of grade I tumors expressed ERα46, whereas this figure was 75 and 62% for grades II and III, respectively (Fig. 2d). Moreover, the ERα46/ERα66 ratio of the relative expression of these isoforms was also significantly higher in low-grade tumors compared to tumors of grades II and III which are highly dedifferentiated (P = 0.0024 and P = 0.0059, respectively; see Fig. 2e). The abundance of ERα46 was also inversely correlated with tumor size (Fig. 2f). Finally, we classified our samples using a size parameter usually used by the American Joint Committee on Cancer (AJCC) to characterize tumor evolution, which is set at a 2-cm cut-off. Using this classification, we found that ERα46 expression was higher in small-sized tumors compared to tumors greater than 2 cm in diameter (P = 0.0039; Fig. 2g). 
Interestingly, even though there were only a few HER2-positive tumors among our samples, a significant correlation was found between HER2 expression and expression of ERα46, indicating that HER2-positive tumors have a low abundance of ERα46 (Additional file 1: Figure S3). All other parameters, including necrosis, were not significant. A few studies have shown that 8% of tumors diagnosed as ERα-negative using the 1D5 antibody were actually positive for ERα when tested with next-generation antibodies such as SP1 [29][30][31]. The authors did not take into account the presence of ERα46, which cannot be detected by the 1D5 antibody. We therefore explored the possibility that these tumors may not express ERα66 but only ERα46 by evaluating the expression of the ERα46 isoform in a series of 19 tumors identified as ERα-negative using the 6 F11 antibody. However, none of these samples were found to express the short ERα46 isoform. A representative sample is shown in Fig. 2a. This study remains preliminary and should be extended to a larger sample series (in process). Altogether, these data obtained by analyzing the expression of ERα46 in a panel of 116 ERα-positive breast tumors highlight the fact that ERα46 was expressed in more than 70% of cases. Furthermore, although the expression of this short isoform was highly variable, it correlated with the tumor evolution stage, with a higher expression in low-grade tumors and lower expression in tumors that were larger, less differentiated, and of higher grade. Identification of the ERα46 isoform Although the bands observed by Western blot analysis were of the expected sizes, we wanted to confirm the nature of the detected proteins. To reach this aim, we purified the ERα proteins from MCF7 cells and from lysates of four tumor samples by immunoprecipitating the two ERα isoforms using the anti-human ERα primary HC20 antibody (Additional file 1: Figure S4A). After separation by SDS-PAGE, the gel bands corresponding to the 46-kDa and 66-kDa proteins were excised and further digested for proteomic analysis. In MCF7 cells (Table 1), 24.4% of the ERα66 sequence was covered, including a peptide in the N-terminal domain spanning amino acids 9-32. Importantly, and as expected, although 23.3% of the ERα46 sequence was detected, no peptide from the N-terminal A/B domain was identified. Proteomic analysis of immunoprecipitated ERα proteins from four tumor samples covered 25% of the ERα66 sequence and 15.3% of the ERα46 sequence, respectively (Additional file 1: Figure S4B and S4C). Again, although peptides 184-206, 402-412, and 450-457 were found in the 46 kDa-sized band, no peptides located before the ATG at codon 174 were detected; the first peptide found was 184-206. Therefore, although we were unable to characterize the start codon of ERα46, we confirm for the first time that the 46-kDa band identified in Western blot analyses of ERα-positive tumors is without doubt a shorter isoform of ERα. Table 1 Proteomic analysis results from the different cell lines and the four breast tumor samples. The % of sequence coverage corresponds to the number of amino acid residues identified in the proteomic analysis divided by the total number of amino acid residues in the protein sequence. The Mascot score is described in [52]. It uses statistical methods to assess the validity of a match, which enables a simple rule to be used to judge whether a result is significant or not. We report scores as -10*LOG10(P), where P is the absolute probability. ERα46 can be expressed following alternative initiation of translation in response to stress It has already been proposed that an alternative initiation of translation could participate in ERα46 generation through an IRES and the presence of two other potential initiation codons (AUG174/176) in the mRNA coding sequence for amino acids 2-173 of ERα [16]. In line with this potential mechanism, we were able to detect ERα46 by Western blot analysis in MDA cells transfected with full-length ERα66 (Fig. 1b). In order to definitively confirm this hypothesis, we analyzed ERα46 expression in MDA cells transfected with a construct in which both of these potential initiation codons were mutated (ERα46⁰). 
As shown in Fig. 3a, ERα46 expression was not detected using this ERα46⁰ construct, demonstrating that the two potential initiation codons are necessary to generate the ERα46 isoform. We next sought to determine how this putative IRES sequence can be stimulated. IRESs were found to be activated in tumor cells continually subjected to diverse stress conditions of the tumor microenvironment [32,33]. Furthermore, accumulating evidence argues for the presence of chronic endoplasmic reticulum stress or unfolded protein response (UPR) in different types of cancers, including breast cancer (for a recent review, see [34]). Given the preferential shift towards cap-independent mRNA translation during UPR [35], we hypothesized that endoplasmic reticulum stress might stimulate the translation of open-reading frames downstream of the major initiation codon. To address this question, we transduced MDA-MB-231 cells with a bi-cistronic lentivector carrying the cDNA sequence of the A/B domain (amino acids 2-173) of ERα cloned between LucR and LucF (Fig. 3b). In these transduced MDA cells (Fig. 3c) as well as in transduced MCF7 cells (Additional file 1: Figure S5A), the LucF/LucR ratios were found to be significantly increased in response to UPR inducers (i.e., DTT and thapsigargin) (Fig. 3c). These inductions correlated with the observed increase in ERα46 protein levels after stress induction in cells stably transfected with the full-length ESR1 cDNA (Fig. 3d). As a control, we used an siRNA directed against LucR, which diminished LucF activity (Additional file 1: Figure S5B and C), demonstrating the absence of either an internal promoter in the intervening sequence or stress-induced cryptic alternative splicing that could have shunted the LucR cistron. Taken together, these data suggest that ERα46 can be produced by stress inducers via an IRES-dependent mechanism. ERα46 antagonizes the ERα66-mediated proliferative response of MCF-7 cells to E2 in a dose-dependent manner Next, we explored the impact of a high expression level of ERα46 on E2-induced proliferation of breast tumor cells using MCF-7 cell lines engineered to overexpress a tet-inducible ERα46 (Fig. 4). Proliferation in response to E2 was not influenced by tetracycline treatment, as demonstrated using the MCF7-B0 sub-clone, which carries an empty vector. By contrast, the proliferation in response to E2 was partially abrogated in the MCF7-B1 clone after tetracycline induction (ratio of ERα46/66 close to 1) and almost completely abolished in the MCF7-B2 clone (ratio of ERα46/66 close to 10), which expresses the highest level of ERα46. These results indicate that overexpression of the ERα46 isoform inhibited E2-mediated cell proliferation, with inhibition being proportional to the expression of ERα46. 
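Referring back to the bi-cistronic reporter experiments above, the IRES readout reduces to a simple ratio and a fold induction over vehicle; the short Python sketch below illustrates the calculation with made-up luminescence readings (it is not the study's analysis pipeline, and the numbers are purely hypothetical).
def ires_activity(luc_f, luc_r):
    # LucF is IRES-dependent, LucR is cap-dependent and controls for expression of the transgene
    return luc_f / luc_r

def fold_induction(treated, vehicle):
    # relative IRES activity of a stressed sample versus the vehicle control; each tuple is (LucF, LucR)
    return ires_activity(*treated) / ires_activity(*vehicle)

# Hypothetical example: thapsigargin-treated versus vehicle-treated MDA-A/B cells
print(fold_induction(treated=(5200, 40000), vehicle=(2600, 52000)))  # ~2.6-fold increase in LucF/LucR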
Identification of cofactors that differentially interact with ERα66 and ERα46 This inhibition of proliferation may occur through the differential recruitment of coregulators by the ERα46 and ERα66 isoforms in the cellular responses induced by E2. To test this hypothesis, we used the MARCoNI assay to characterize the interaction of the two ERα isoforms with 154 unique coregulator-derived motifs, both in their unliganded (apo) conformation or with concentrations of E2 ranging from 10 -12 to 10 -7 M, corresponding to full ligand saturation and receptor activation [26]. The resulting overall binding patterns (Additional file 1: Figure S6A and B) indicated that, qualitatively, the receptors bind to the same subset of coregulators, with a clear response of the ERα46 isoform to E2. However, an isoform-selective difference in the binding affinities of both apo and fully-activated receptors was also observed (Fig. 5a). Further analysis of the E2 response curves evidenced that: (i) both isoforms behave similarly for some interactions (with BRD8 for instance); (ii) some peptides bind better to one of the isoforms, for example NCOA3 (also named SRC-3) which has a higher affinity for ERα66 and PRGC1 to ERα46; and (iii) some cofactors bind to both isoforms equally in their apo conformation, but increasing E2 concentrations favor their association with one or the other, as observed for the binding of EP300 to ERα66 or NROB2 to ERα46 (Fig. 5a). The hierarchical clustering of ligand-induced modulation of coregulator interactions was then performed to look for differences and was quantified as the log-fold change in binding (modulation index (MI)) (Fig. 5b). This analysis confirmed that, although qualitatively the overall responses looked generally quite similar, there is a quantitative differential modulation with some selective preference to certain coregulator peptides. Upon E2 binding, an overall increased preference for cofactor binding to ERα66 over ERα46 was observed, as shown in Fig. 5c. We then investigated the potency of the ERα antagonists 4-OH-tamoxifen and fulvestrant in inhibiting cofactor binding to ERα46 and ERα66 in the presence of E2. The profile of the EC 50 values for 4-OH-tamoxifen and fulvestrant clearly showed a better efficacy of 4-OH-tamoxifen than fulvestrant in inhibiting E2-induced binding of the receptor isoforms to coregulators (Additional file 1: Figure 6C). However, the potencies of these antagonists to inhibit binding to ERα46 and ERα66 were comparable (Fig. 5d). Altogether, these data clearly demonstrate that the two isoforms show some specificity and heterogeneity in terms of their binding to coregulators. Differential gene expression response to E2 mediated by ERα46 and ERα66 In order to evaluate the impact of differences in coregulator affinity between the two ERα isoforms in terms of transcriptional regulation, we aimed at determining the expression of some target genes in MDA-ERα46 and MDA-ERα66 cells in response to E2. To directly assess the correlation between these events and cell proliferation, we selected a set of genes for their known association with this process, some of them also described as regulated by E2 in MCF7 breast cancer cells or MDA-ERα66 cells (Additional file 1: Figure S7) [36,37]. The data ( Fig. 6 and Additional file 1: Figure S8) indicate that the majority of the tested genes are differentially regulated in MDA-ERα46 and MDA-ERα66 cells. 
While some genes were found to be regulated by E2 in MDA cells expressing either ERα66 or ERα46, albeit at higher levels for ERα66 (GREB1, TFF1, and PDGFB), some genes were specifically regulated by ERα46 (MAPK14 and CDC14A) or ERα66 (IER3, CDK6, ASAP1, IL1B, and CCNB2). Moreover, the basal levels of transcription were also differentially affected by the expression of these isoforms as compared to naïve MDA wt cells. Indeed, some genes were specifically affected by either ERα66 (CDK5 and IL1B) or ERα46 (GREB1, TFF1, MAPK14, CDK2, and CCNE1), but also by both isoforms (PDGFB, IER3, BRCA1, and TNF). Altogether, these data clearly demonstrate that ERα46 and ERα66 have different transcriptional activities. Discussion The work reported here aimed to analyze the expression levels and characteristics of the overlooked ERα46 isoform in breast tumor samples. We have clearly shown that ERα46 is expressed in the majority of human breast tumors tested (more than 70%) with highly variable expression levels, sometimes even more abundant than ERα66. Importantly, the ERα46/ERα66 expression ratio negatively correlated with tumor grade: poorly differentiated tumors (of higher grade and larger size) presented lower amounts of ERα46. These data indicate that this shorter isoform may have potential clinical relevance. Unfortunately, since this retrospective study started in 2011, it is too early to further analyze any correlation between the abundance of ERα46 and overall survival or recurrence of disease. This criterion requires a time period of 15-20 years due to delayed tumor relapses of ERα-positive tumors [38]. Fig. 5 Ligand-specific coregulator binding profiles. Interactions of the estrogen receptor alpha (ERα)46 and ERα66 proteins with coregulator motifs were measured using MARCoNI peptide arrays. Interactions were evaluated at different concentrations of E2, ranging from 10^-12 to 10^-7 M. a Examples of dose-dependent E2-mediated modulation of ERα46 and ERα66 interactions with individual coregulator motifs to illustrate that the two proteins bind to coregulators with differential affinities. b Heatmap showing hierarchical clustering (Euclidean distance, average linkage) of E2-mediated interactions (represented as the modulation index (MI)) between ERα46 and ERα66 proteins and peptides representing coregulator-derived binding motifs. MI is expressed as log of fold changes relative to vehicle. Zoom outs from the left and the right main clusters are shown below. c Boxplot representation of E2 potency for the modulation of coregulator binding of the two isoforms, using EC50 values obtained with a curve fit of R² > 0.8. 
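As a concrete illustration of the modulation index defined in the legend above (the log of the fold change in coregulator binding relative to vehicle), a minimal sketch follows; the log base and the array intensities are assumptions made for illustration only, not values from the MARCoNI dataset.
import math

def modulation_index(signal_ligand, signal_vehicle):
    # log fold change of coregulator-peptide binding relative to vehicle
    return math.log10(signal_ligand / signal_vehicle)

# Hypothetical intensities: E2 roughly doubles binding of a coactivator motif to ERα66
# but barely changes its binding to ERα46
print(modulation_index(2100, 1000))  # ERα66: MI ~ +0.32
print(modulation_index(1050, 1000))  # ERα46: MI ~ +0.02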
As a consequence, the hormonedependent characterization of the tumor, presently performed by immunohistochemistry, may only be based on the expression level of ERα66. This finding highlights the importance of the choice of antibodies used for the diagnosis of breast cancer, which able or not to detect the ERα46 isoform. Furthermore, we found that ERα46 expression level was related to tumor size, suggesting that expression of the 46-kDa isoform in breast tumors could be associated with a limited tumor growth. Such a hypothesis is supported by previous studies demonstrating that ERα46 antagonizes the proliferative effects induced by ERα66 activation both in vitro in MCF7 cells [17] and SaOS osteosarcoma cells [9], as well as in colorectal tumor tissues [12]. Its expression could therefore maintain a low tumor volume, possibly by stimulating apoptosis [12]. We confirmed these data and also found that increasing the amount of ERα46 in MCF7 cells decreases their proliferative response to E2 in a dose-dependent manner. Importantly, other studies have linked the N-terminal region of ERα with cell proliferation. Merot et al. [39] used in vitro systems to show that the respective contribution of AF-1 and AF-2 towards ERα transcriptional activity varies upon the stage of cell differentiation. This key role of AF-1 was also demonstrated physiologically in the uterus, a tissue that is highly sensitive to the proliferative actions of E2 in vivo. Indeed, it was shown that E2 had no proliferative action on uterine epithelial cells in ERα-AF1 0 mice, which express an AF-1-deficient 49-kDa ERα isoform [19]. ERα has been described as being at the crossroads of paracrine or autocrine growth factor and endocrine estrogenic signaling [40], and its activity can be controlled in the absence of E2 through phosphorylation cascades induced by insulin-like growth factor (IGF)-1, epidermal growth factor (EGF), or fibroblast growth factor (FGF)-2. Importantly, most of the residues of ERα that have so far been implicated in these E2-independent responses or in the modulation of ERα activities in response to growth factor signaling are located within the N-terminal region of the protein and constitute an intrinsic part of AF-1 [41]. Altogether, these data support the hypothesis that AF-1 is the region of ERα required for cell proliferation, and that its absence in the ERα46 isoform is likely to confer specific properties to this protein compared to the ERα66 isoform. In our study, we also analyzed the ability of the two isoforms to bind cofactors using the MARCoNI assay, and found that the two isoforms show some heterogeneity in terms of binding to coregulators. The ability of the two apo-ERα isoforms to recruit transcription factors to the pS2/ TFF1 promoter was previously compared by Re-ChIP experiments [22]. This study identified that ERα46 specifically recruited components of the Sin3 repressive complex (NCOR/SMRT) to the TFF1 promoter in the absence of E2. This was associated with specific inhibition of the basal transcription of the TFF1 gene by the ERα46 isoform. More recently, the quaternary structure of a biologically active ERαcoactivator complex on DNA has been determined by cryoelectron microscopy [42]. This study showed the location of the AF-1 domain in the complex, which supports a role in the recruitment of the coactivator SRC-3. Interestingly, in our assay we also found a stronger binding of ERα66 to some peptides derived from SRC-3 (Fig. 5b). 
In contrast, the ERα46 isoform was found to bind to NRB02 better than the ERα66 isoform. NRB02 acts as a negative regulator of receptor-dependent signaling pathways [43]. These data therefore underline the importance of the AF-1 domain for full transcriptional activation of the ERαcoactivator complex. Indeed, our study also demonstrates a differential gene expression induced by ERα46 or ERα66 at the basal level but also in response to E2. Interestingly, among these differentially regulated genes, ERα46 specifically upregulated the MAPK14 and CDC14 genes in the presence of E2 (respectively 20-and 1.7-fold) as opposed to the ERα66 isoform. These genes are implicated in the suppression of the cell proliferation [44,45] and these regulations may at least partly explain the reduced E2-mediated proliferative response observed when ERα46 is co-expressed. Although not significant, the originally identified proapoptotic HRK gene [46] also exhibited a slight tendency to be specifically regulated by the ERα46 (twofold, P = 0.09). Our observations raise the hypothesis that the presence of the short ERα46 isoform in breast tumors could indicate a more favorable prognosis. Such an assessment is also supported by the study of Klinge et al. who indicated that almost 40% of patients developing a secondary tamoxifen resistance exhibit a reduced expression of ERα46 [13]. This supports the idea that endocrine resistance is associated with a decreased expression of ERα46 and thus with poor breast cancer prognosis. Subtle interactions between these isoforms could influence the action of selective estrogen receptor modulators (SERMs) against tumor growth and metastasis. Interestingly, tamoxifen antagonizes the AF-2 of both the ERα66 and ERα46 isoforms, but at the same time acts as an agonist on AF-1 of ERα66 in a tissuedependent manner. Due to the lack of AF-1, tamoxifen cannot elicit such an action on ERα46. As shown in our analysis with the MARCoNI assay, ERα46 appears to be as potent as the ERα66 in dissociating coregulatory binding in response to tamoxifen or fulvestrant. However, since these interactions are very complex, further investigations are needed in tumor samples in vivo. Altogether, these data point out the importance of the expression of both ERα isoforms in breast tumors. In the absence of an ERα46-specific antibody, automated immunoblot analyses would be necessary to render ERα46 detection practically feasible in breast cancer diagnosis. However, it cannot be ruled out that new techniques based on structural properties of the two estrogen receptors could appear in the future [47]. Further characterization of ERα46 will then be needed to refine both prognosis and therapy. Although the exact mechanisms accounting for the expression of the ERα46 isoform still remain to be clarified, three potential processes have been identified: (i) alternative splicing could generate an mRNA deficient in the nucleotide sequence corresponding to exon 1 encoding the A/B domain [14]; (ii) proteolysis, as has also been suggested in human breast tumors [15] and in the mouse uterus [48]; and (iii) an IRES located within the full length mRNA could allow the initiation of translation at a downstream ATG which encodes methionine 174 in the human ERα66 [16] (Fig. 7). Unfortunately, our proteomic approach did not identify peptides close to this initiation codon. One potential explanation for this is Fig. 
7 Possible mechanisms to generate the estrogen receptor alpha (ERα)46 and epitope mapping of the antibodies used in this work. ERα46 can be generated either by alternative splicing of the first coding exon [16], by internal ribosomal entry site (IRES)-dependent translation from the full-length ESR1 transcript [14], or by proteolysis of ERα66 by as-yet unknown proteases [15]. It should be emphasized that these three different mechanisms could concur to generate the ERα46 isoform, which, to the best of our knowledge, is a unique feature among proteins. AF activation function, DBD DNA-binding domain, H Hinge domain, LBD ligand-binding domain the presence of lysine and arginine residues (target residues for trypsin) in the vicinity to the two potential initiation codons (KGSMAMESAKETRY). The length of the peptides generated after complete trypsin digestion may be too short to be identified despite the high dynamic range of the nano-LC-MS/MS system used for the proteomic analysis. At this stage, it is not possible to determine the respective roles of the different mechanisms of ERα46 generation. However, we provide evidence that an IRES-dependent alternative translational initiation under stress conditions could lead to the generation of ERα46. This is further supported by the association of the ERα mRNA, along with other IREScontaining mRNAs, to polysomes in apoptotic MCF7 cells in which cap-dependent translation is repressed [49]. Moreover, 4E-BP1, a negative regulator of cap-dependent mRNA translation, was found to be overexpressed in breast tumors compared to healthy epithelium, suggesting that translational mechanisms such as IRES might be active [50]. Interestingly, genes such as such as Apaf-1, DAP5, CHOP, p53, etc., that are also selectively translated by an IRES-driven mechanism, allow the cells to fine-tune their responses to cellular stress and, if conditions for cell survival are not restored, to proceed with final execution of apoptosis [51]. Although the significance of induction of ERα46 by cellular stress remains unknown, this isoform of ERα could be part of an orchestrated IRES-driven response, and contribute to slowing down of the proliferative response to E2. Conclusions This study demonstrates that a shorter ERα46 isoform previously ignored in diagnosis is frequently expressed in ERαpositive breast tumors, as revealed by Western blot analysis. Careful attention should therefore be taken in the choice of antibodies used for immunohistochemistry as several do not to detect the expression of the ERα46 isoform. We have demonstrated that this shorter isoform can differentially bind to coregulators in response to E2 which might modulate the transcriptional hormonal response. This highlights the potential importance of this shorter isoform in E2 signaling and its antiproliferative actions in breast cancer. We indeed found a clear inverse correlation between tumor size and ERα46 levels. Thus, due to the importance of ERα and hormonal treatments in the management of breast cancers, ERα46 expression should now be further studied. Additional file Additional file 1: Figure S1. Clinical parameters of the breast tumor samples. IDC invasive ductal carcinoma, ILC invasive lobular carcinoma. Figure S2. Expression of Flag-ERα36 in transiently transfected MDA-MB231 as detected by immunocytochemistry using an anti-Flag antibody. Figure S3. Analysis of potential correlation between the clinical parameters of the breast tumor samples and the expression of ERα46 isoform. 
A significant P value (indicated in red) was only found between ERα46 expression and HER-2 positive breast tumors. IDC invasive ductal carcinoma, ILC invasive Lobular Carcinoma. Figure S4. Results of the proteomic analysis of the ERα46 protein detected in tumor samples. A) Western blot with the SP1 antibody obtained after immunoprecipitation of ERα with HC20 antibody in two human tumors overexpressing the putative ERα46 isoform. B and C) Sequence coverage obtained from the peptides identified by proteomic analysis shown in bold red on ERα66 and on ERα46 isoforms M (methionine): putative translational start sites generating the ERα46 isoform. Figure S5. The stress-induced increase in LucF activity is reproducible in MCF7 cells (A) and is not due to the generation of mono-cistronic LucF transcripts via an internal promoter or cryptic splicing as observed in MDA-Lenti-AB exposed to two siRNAs-lucR (B and C). Figure S6. Modulation profiles of the interaction of ERα46 (red) and ERα66 (blue) with coregulators in A) Apo proteins and B) in response to E2 binding. C) Profile of EC 50 values of 4-OH-tamoxifen-(red) and fulvestrant-(blue) induced modulation of ERα46 and ERα66 coregulators interaction when use in antagonists mode with 6.3 nM E2. Figure S7. List of primers used in the expression profiling of target genes. Figure S8. Availability of data and materials The data concerning the tumor samples that support the findings of this study cannot be shared in a public repository according to the CNIL (Comité National Informatique et Libertés) guidelines. The other datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Authors' contributions FL was the principal investigator who conceived, coordinated, oversaw the study, and wrote the manuscript. JFA and CF helped in the study design and in writing the manuscript. EC, FB, and HL acquired, analyzed, interpreted the experiment data, and helped to revise the manuscript. AS and OBS acquired the data of the proteomics analysis and helped to revise the manuscript. RM
2017-08-03T02:19:46.421Z
2016-12-01T00:00:00.000
{ "year": 2016, "sha1": "42a1a150fc33b071850cab3ed571031813bdf403", "oa_license": "CCBY", "oa_url": "https://breast-cancer-research.biomedcentral.com/track/pdf/10.1186/s13058-016-0780-7", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "62c9dad2999cd62842d21e8ec696508b13a2d3c9", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
17995177
pes2o/s2orc
v3-fos-license
A Review on Biodentine, a Contemporary Dentine Replacement and Repair Material Biodentine is a calcium-silicate based material that has drawn attention in recent years and has been advocated for use in various clinical applications, such as root perforations, apexification, resorptions, retrograde fillings, pulp capping procedures, and dentine replacement. There has been considerable research performed on this material since its launch; however, there are few review articles that collate the information and data obtained from these studies. Therefore, this review article was prepared to provide the reader with a general picture of the findings regarding the various characteristics of the material. The results of a PubMed search were classified and presented along with some critical comments where necessary. The review initially focuses on various physical properties of the material with subheadings and continues with biocompatibility. Another section includes the review of studies on Biodentine as a vital pulp treatment material, and the article is finalized with a summary of some case reports where the material has been used. Background Calcium silicate based materials have gained popularity in recent years due to their resemblance to mineral trioxide aggregate (MTA) and their applicability in cases where MTA is indicated. Although various calcium silicate based products have been launched onto the market recently, one of these has especially been the focus of attention and the topic of a variety of investigations. This material is the "Biodentine" calcium silicate based product, which became commercially available in 2009 (Septodont, http://www.septodontusa.com/) and was specifically designed as a "dentine replacement" material. Biodentine has a wide range of applications including endodontic repair (root perforations, apexification, resorptive lesions, and retrograde filling material in endodontic surgery) and pulp capping, and can be used as a dentine replacement material in restorative dentistry. The material is formulated using MTA-based cement technology, with improvements to some properties of these types of cements, such as physical qualities and handling [1]. Since "Biodentine" has frequently been mentioned in recent literature and serves as an important representative of tricalcium silicate based cements, a review of the studies pertaining to its properties will contribute to a clearer picture of the general characteristics of this frequently acknowledged material. This review article makes a general analysis, provides a summary of studies on Biodentine, and critically evaluates the existing knowledge regarding the properties of the product. A search was conducted in PubMed using the keywords "Biodentine," "dentistry," and "endodontic repair." Articles published since the launch of the material onto the market were retrieved and classified according to the topic that they focused on. A total of 52 papers were included, consisting of those directly focusing on Biodentine as well as relevant papers that do not include Biodentine but are related to dental materials in general. Composition. The product file of Biodentine states that the powder component of the material consists of tricalcium silicate, dicalcium silicate, calcium carbonate and oxide filler, iron oxide shade, and zirconium oxide. 
Tricalcium silicate and dicalcium silicate are indicated as main and second core materials, respectively, whereas zirconium oxide serves as a radiopacifier. The liquid, on the other hand, contains calcium chloride as an accelerator and a hydrosoluble polymer that serves as a water reducing agent. It has also been stated that fast setting time, one unique characteristics of the product, is achieved by increasing particle size, adding calcium chloride to the liquid component, and decreasing the liquid content. The setting period of the material is as short as 9-12 minutes. This shorter setting time is an improvement compared to other calcium silicate materials [1]. Some authors have indicated that there are few studies on the properties of newly developed materials such as Biodentine [2]. The material is characterized by the release of calcium when in solution [3,4]. Tricalcium silicate based materials are also defined as a source of hydroxyapatite when they are in contact with synthetic tissue fluid [5][6][7]. A search of the literature reveals a few studies that aim to further investigate the composition and setting characteristics of the material. Grech et al. [7] assessed the composition of materials and leachate of a prototype cement of tricalcium silicate and radiopacifier (without any additives) and 2 commercially available tricalcium silicate cements, one of which was Biodentine. Their main purpose was to assess the effect of the additives used in commercial brands. The authors characterized the hydrated cements using scanning electron microscopy (SEM) and X-ray energy dispersive analysis (EDX), X-ray diffraction (XRD), and Fourier transform infrared spectroscopy (FT-IR). They concluded that Biodentine resulted in the formation of calcium silicate hydrate and calcium and hydroxide, leached in solution. The materials, when hydrated, consisted of a cementitious phase, rich in calcium, silicone, and a radiopacifying material. Biodentine was further described as having calcium carbonate in powder and the carbonate phase of the material was verified by XRD and FT-IR analysis. The Biodentine powder also had inclusions of calcium carbonate which were relatively large compared to cement particles. There were hydration products around the circumference of the calcium carbonate particles. The authors added that calcium carbonate acts as a nucleation site, enhancing the microstructure [7]. Similar results were reported by Camilleri et al. [8] who compared the composition of Biodentine and MTA Angelus with experimentally produced laboratory cement consisting of tricalcium silicate and zirconium oxide. Their analysis also showed that tricalcium silicate was the main constituent of Biodentine and no dicalcium silicate or calcium oxide was detected. They further noted that Biodentine consisted of other additives for the enhancement of the material. Calcium carbonate was used as 15% in the powder component [8]. An important feature of the calcium carbonate additive was to act as a nucleation site for C-S-H, thereby reducing the duration of the induction period, leading to a faster setting time. The tricalcium silicate grains in Biodentine were also reported to be finer and calcium chloride and a water soluble polymer were included in the liquid portion [8]. Setting Time. Grech et al. [9] investigated the setting time of Biodentine using an indentation technique while the material was immersed in Hank's solution. 
The authors described that this methodology uses a Vicat apparatus with a needle of specific mass. The setting time of the mixture is calculated as the time taken from the start of mixing until the indentor fails to leave a mark on the set material surface. The setting time of Biodentine was determined as 45 minutes. This short setting time was attributed to the addition of calcium chloride to the mixing liquid [9]. Calcium chloride has also been shown to result in accelerated setting time for mineral trioxide aggregate [10]. An interesting finding of the study by Grech et al [9] was that the highest setting period was determined for Bioaggregate, another tricalcium silicate based material. The product sheet of Biodentine [1] indicates the setting time as 9 to 12 minutes, which is shorter than the one observed in the study by Grech et al. [9]. However, 9-12 minutes indicated in the product sheet is the initial setting time, whereas Grech et al. [9] evaluated the final setting time. Therefore, both papers are not comparable. Villat et al. [11] preferred a different methodology for the assessment of the setting time, the impedance spectroscopy that assesses the changes in electrical resistivity. Interestingly, impedance values were stabilized after 5 days for the glass ionomer cement while at least 14 days were necessary for the calcium silicate based cement. The authors speculated that this result was due to the higher porosity for Biodentine cement, characterizing higher capacity of ion exchanges between the material and its environment [11]. Compressive Strength. Compressive strength is considered as one of the main physical characteristics of hydraulic cements. Considering that a significant area of usage of products such as Biodentine is vital pulp therapies, it is essential that the cement has the capacity to withstand masticatory forces, in other words, sufficient compressive strength to resist external impacts [2]. The product sheet of Biodentine states that a specific feature of Biodentine is its capacity to continue improving in terms of compressive strength with time until reaching a similar range with natural dentine [1]. In the study by Grech et al. [9], Biodentine showed the highest compressive strength compared to the other tested materials. The authors attributed this result to the enhanced strength due to the low water/cement ratio used in Biodentine. They stated that this mode of the material is permissible as a water soluble polymer is added to the mixing liquid. Kayahan et al. [2] evaluated the compressive strength from another perspective and drew conclusions specifically pertaining to clinical usage. Considering that acid etching is one of the steps following the application of Biodentine for the provision of mechanical adhesion, the authors aimed to assess whether any alterations exist in terms of compressive strength following the etching procedure. They concluded that acid etching procedures after 7 days did not reduce the compressive strength of ProRoot MTA and Biodentine [2]. Although these studies are limited and further research is definitely warranted; they hold promise for Biodentine as a suitable material for use in procedures, such as vital pulp therapies, where there is direct exposure to external masticatory forces and compressive strength capacity is of primary significance. Furthermore, in a study by Koubi et al. [12], Biodentine was used as a posterior restoration and revealed favorable surface properties such as good marginal adaptation until 6 months. 
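For reference, compressive strength values of the kind discussed above are derived from the failure load and the loaded cross-section of the specimen; the sketch below shows the arithmetic with a hypothetical specimen geometry and load (not data from the cited studies).
import math

def compressive_strength_mpa(failure_load_n, specimen_diameter_mm):
    # maximum load at failure divided by the loaded cross-sectional area; N/mm^2 = MPa
    cross_section_mm2 = math.pi * (specimen_diameter_mm / 2) ** 2
    return failure_load_n / cross_section_mm2

# Hypothetical example: a 4-mm diameter cylinder failing at 1250 N -> ~99 MPa
print(compressive_strength_mpa(failure_load_n=1250, specimen_diameter_mm=4.0))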
Microhardness. Grech et al. [9] evaluated the microhardness of the material using a diamond shaped indenter. Their results showed that Biodentine displayed superior values compared to Bioaggregate and IRM. Camilleri [13], in a study comparing the physical properties of Biodentine with a conventional glass ionomer (Fuji IX) and a resin modified glass ionomer (Vitrebond), showed that Biodentine exhibited higher surface microhardness compared to the other materials when unetched. On the other hand, there was no difference in the microhardness of different materials when they were etched [13]. Bond Strength. Considering that Biodentine is recommended for use as a dentine substitute under permanent restorations, studies were performed that assess the bond strength of the material with different bonding systems. Odabaş et al. [14] evaluated the shear bond strength of an etch-and-rinse adhesive, a 2-step self-etch adhesive and a 1-step self-etch adhesive system to Biodentine at different intervals. No significant differences were found between all of the adhesive groups at the same time intervals (12 minutes and 24 hours). When different time intervals were compared, the lowest bonding value was obtained for the etch-andrinse adhesive at a 12-minute period, whereas the highest was obtained for the 2-step self-etch adhesive at the 24-hour period [14]. Another area of use of Biodentine, specifically from an endodontic point of view, is the repair of perforations, which is likely to be encountered in clinical practice. It is essential that a perforation repair material should have sufficient amount of push-out bond strength with dentinal walls for the prevention of dislodgement from the repair site. Aggarwal et al. [15] studied the push-out bond strengths of Biodentine, ProRoot MTA, and MTA Plus in furcal perforation repairs. Push-out bond strength increased with time. Their results showed that the 24 h push-out strength of MTA was less than that of Biodentine and blood contamination affected the push-out bond strength of MTA Plus irrespective of the setting time. A favorable feature of Biodentine determined by the authors was that blood contamination had no effect on the push-out bond strength, irrespective of the duration of setting time [15]. El-Ma'aita et al. [16] aimed to assess the effect of smear layer on the push-out bond strength of calcium silicate cements and whether the removal of this layer would have any overall influence on the bonding characteristics of these materials. The authors used Biodentine, ProRoot MTA, and Harvard MTA as root fillings. The results showed that the removal of the smear layer significantly reduced the pushout bond strengths of calcium silicate cements and the smear layer was a critical issue that determines the bond strength between dentine and calcium silicate cements such as Biodentine. The authors attributed this result to the inability of calcium silicate cement particles to penetrate the dentinal tubules due to their particle size. They speculated that the smear layer is important in the formation of the interfacial layer and may be involved in the mineral interaction between the CSC and radicular dentin. It is appropriate to mention that it is not customary to use calcium silicate based materials for the obturation of the entire root canal system and such an approach might not be preferable especially in narrow and curved root canals. On the other hand, the study by El-Ma'aita et al. 
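Push-out bond strength figures such as those compared above are conventionally obtained by dividing the dislodgement force by the bonded lateral area of the cement plug; the sketch below illustrates this calculation for a plug approximated as a cylinder, with hypothetical dimensions and force rather than values from the cited studies.
import math

def push_out_strength_mpa(force_n, plug_diameter_mm, dentine_thickness_mm):
    # dislodgement force divided by the bonded lateral (side) area of a cylindrical plug
    bonded_area_mm2 = math.pi * plug_diameter_mm * dentine_thickness_mm
    return force_n / bonded_area_mm2  # N/mm^2 = MPa

# Hypothetical example: a 1.3-mm wide plug in a 2-mm thick dentine slice dislodged at 50 N -> ~6.1 MPa
print(push_out_strength_mpa(force_n=50, plug_diameter_mm=1.3, dentine_thickness_mm=2.0))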
[16] is significant because it successfully demonstrated the bonding characteristics of these popular materials which are unique in contemporary dental applications. Hashem et al. [17], in a recently published report, drew attention to another issue in terms of bond strength characteristics of Biodentine with overlying materials that was not mentioned previously. Biodentine is a weak restorative material in its early setting phase. The authors advocated that, in case of a laminate/layered definitive restoration, the placement of the overlying resin composite must be delayed for more than 2 weeks so that Biodentine material will undergo adequate maturation to withstand contraction forces from the resin composite [17]. In a study by Guneser et al. [18], Biodentine showed considerable performance as a repair material even after being exposed to various endodontic irrigation solutions, such as NaOCl, chlorhexidine, and saline, whereas MTA had the lowest push-out bond strength to root dentin. Porosity and Material-Dentine Interface Analysis. Tricalcium silicate based materials are especially indicated in cases such as perforation repair, vital pulp treatments, and retrograde fillings where a hermetic sealing is mandatory. Therefore, the degree of porosity plays a very important role in the overall success of treatments performed using these materials, because it is critical factor that determines the amount of leakage. Porosity has been shown to have an impact upon numerous other factors including adsorption, permeability, strength, and density. It has further been stated that the maximum pore diameter, which corresponds to the largest leak in the sample, along with bacterial size and their metabolites, will be indicative of the leakage that occurs along the root-end filling materials [19]. Camilleri et al. [20] evaluated the root dentine to material interface of Bioaggregate, Biodentine, a prototype radiopacified tricalcium silicate cement (TCS-20-Zr) and intermediate restorative material (IRM) when used as root-end filling materials in extracted human teeth after 28 days of dry storage and immersion in HBSS using a confocal microscope together with fluorescent tracers and also a field emission gun scanning electron microscope. They used a prototype material (TCS-20-Zr) similar to Biodentine in composition which was composed of a cementitious phase, namely, tricalcium silicate and a radiopacifier (zirconium oxide) with no additives. The reason for testing such a prototype material was to assess the effects that the additives in Biodentine have on the porosity of the material and to detect any changes in the material characteristics at the root-dentine to material interface. The testing was performed in two environmental conditions, namely, dry or immersed in a physiological solution. According to their results, Biodentine and IRM exhibited the lowest level or degree of porosity. The confocal microscopy used in conjunction with fluorescent tracers demonstrated that dry storage resulted in gaps at the root dentine to material interface and also cracks in the material and Biodentine was the most affected one from ambient conditions. Dry storage of Biodentine caused changes in the material microstructure and cracks at the root dentine to Biodentine interface. Furthermore, the gaps occurring due to material shrinkage allowed the passage of the fluorescent microspheres. These gaps were defined as significant as they had the potential to allow the ingress and transmission of microorganisms [20]. 
The authors' results were significant from a clinical standpoint because they indicate that the type of treatment performed is a critical factor determining the porosity and the subsequent leakage occurring thereafter. In case the procedure is a retrograde filling, where there is a continuously moist environment, the lower porosity exhibited by Biodentine is advantageous. However, in procedures such as liners, bases, or dentine replacement, the material is generally kept dry, which might pose a problem in terms of porosity and result in the formation of gaps at the interface, leading to bacterial passage. This leads to the conclusion that caution must be exercised during the selection of Biodentine in certain clinical conditions where moisture is not necessarily present. Another study on porosity was one by De Souza et al. [21] where Biodentine was compared to other silicate based cements, IRoot BP Plus, Ceramicrete, and ProRoot MTA, using micro-CT characterization. The authors indicated that no significant difference in porosity between IRoot BP Plus®, Biodentine®, and Ceramicrete was observed. In addition, no significant differences were found in porosity between the new calcium silicate-containing repair cements and the gold-standard MTA. The authors made similar conclusions in terms of the behavior of tricalcium silicate based materials and drew similarity between them and conventional MTA in terms of microleakage, solubility, and microfractures in the clinical setting [21]. Gjorgievska et al. [22] conducted a study where the interfacial properties of 3 different bioactive dental substitutes were compared, one of which was Biodentine. Whilst the cavity adaptation of bioglass was poor owing to its particle size, both glass ionomers and calcium silicate cements yielded favorable results as dentine substitutes. During SEM analysis, Biodentine crystals appeared firmly attached to the underlying dentine surface. The authors further emphasized the resemblance of the interfacial layer formed between Biodentine and dentine to the hard tissue layer formed by ProRoot MTA and further drew attention to the hydroxyapatite crystal growth. Although they found no evidence of ionic exchange, they concluded that the excellent adaptability of this material to the underlying dentine is mainly dependent on micromechanical adhesion [22]. Atmeh et al. [23] studied the interfacial properties of Biodentine and glass ionomer cement by different microscopy and spectroscopy methods and determined the existence of interfacial tag-like structures along the dentine. The alkaline caustic effect of hydration products degraded the collagenous component of dentine next to Biodentine; this altered dentine structure was only observed beneath the Biodentine samples. Radiopacity. Radiopacity is an important property expected from a retrograde or repair material, as these materials are generally applied in thin layers and need to be easily discerned from surrounding tissues. ISO 6876:2001 has established 3 mm Al as the minimum radiopacity value for endodontic cements [24]. Meanwhile, according to ANSI/ADA specification number 57, all endodontic sealers should be at least 2 mm Al more radiopaque than dentin or bone [25]. 
For the determination of the radiopacities of filling materials, the method developed by Tagger and Katz [26] is generally used where radiographic images of the material are taken alongside an aluminium step-wedge. Zirconium oxide is used as a radiopacifier in Biodentine contrary to other materials where bismuth oxide is preferred as a radiopacifier. The reason for such a preference might be due to some study results which show that zirconium oxide possesses biocompatible characteristics and is indicated as a bioinert material with favorable mechanical properties and resistance to corrosion [27]. Grech et al. [9] in a study evaluating the prototype radiopacified tricalcium silicate cement, Bioaggregate, and Biodentine, concluded that all materials had radiopacity values greater than 3 mm Al. Similar results were obtained by Camilleri et al. [8]. On the other hand, a clinical observation stated that the radiopacity of Biodentine is in the region of dentin and the cement is not adequately visible in the radiograph. This posed difficulty in terms of practical applications [28]. This subjective comment was further supported in a study by Tanalp et al. [29] where the radiopacity of Biodentine was found to be lower compared to other repair materials tested (MM-MTA, and MTA Angelus) and slightly lower than the 3 mm Al baseline value set by ISO. Though these results should be interpreted with caution as experimentation conditions, preservation periods and other factors might affect the results of radiopacity studies, they also indicated that radiopacity quality might need to be further investigated. Another consideration should be made based on the clinical scenarios where Biodentine is intended to be used. In cases where there is direct contact with the surrounding connective tissue, biocompatibility is of primary significance. Though the confirmation of adequate placement of the material is important in such cases by relying on the radiopacity value, one can prefer to make a judgement by clinical observation in case the usage of additives to obtain high radiopacity value is likely to compromise the overall biocompatibility. Solubility. Grech et al. [9] demonstrated negative solubility values for a prototype cement, Bioaggregate, and Biodentine, in a study assessing the physical properties of the materials. They attributed this result to the deposition of substances such as hydroxyapatite on the material surface when in contact with synthetic tissue fluids. This property is rather favorable as they indicate that the material does not lose particulate matter to result in dimensional instability. Effect on the Flexural Properties of Dentine. An important issue related to the usage of calcium silicate based materials is their release of calcium hydroxide on surface hydrolysis of their calcium silicate components [3]. On the other hand, it has also been indicated that prolonged contact of root dentine with calcium hydroxide as well as MTA has detrimental and weakening effects on the resistance of root dentine [30,31]. Therefore, it is critical to consider the effects of released calcium hydroxide on dentine collagen, specifically in procedures where there is a permanent contact of dentine with calcium silicate based materials. Sawyer et al. [32] evaluated whether prolonged contact of dentine with calcium silicate based sealers would have any influence on its mechanical properties. 
According to the results of their study, in which they compared Biodentine with MTA Plus, they determined that both materials altered the strength and stiffness of the dentine tissue after aging in 100% humidity. They suggested that, though dentine's ability to withstand external impacts and resist external forces might not be affected to a critical extent when these materials are used in very thin layers, such as for pulp capping or as an apical plug, careful consideration is necessary when obturating the entire root canal system with these materials or when using them for the purpose of dentine replacement [32]. Microleakage. When Biodentine is used as a liner or base material, leakage should especially be considered, as leakage may result in postoperative sensitivity and secondary caries, leading to the failure of the treatment. Koubi et al. [33] were the first to assess the in vitro marginal integrity of open-sandwich restorations based on aged calcium silicate cement and resin-modified glass ionomer cement. Results of glucose filtration analysis after one-year aging showed that both materials displayed similar leakage patterns and that Biodentine performed as well as the resin-modified glass ionomer cement. Another significant property of Biodentine was that it did not require specific preparation of the dentin walls. They explained the good marginal integrity of Biodentine by the ability of calcium silicate materials to form hydroxyapatite crystals at the surface. These crystals might have the potential to increase the sealing ability, especially when formed at the interface of the material with the dentinal walls. Furthermore, the interaction between the phosphate ions of saliva and the calcium silicate based cements might lead to the formation of apatite deposits, thereby increasing the sealing potential of the material. The authors additionally identified the nanostructure and small particle size of the gel formed by the calcium silicate cement as one of the factors that influenced sealability, as this texture allowed the material to spread better onto the surface of the dentine. Slight expansion was also noted for these materials, which contributed to their better adaptation [33]. Another study comparing the leakage of Biodentine with a resin-modified glass ionomer (Fuji II LC) was one by Raskin et al. [34], where silver penetration was evaluated in cervical lining restorations. Results similar to those of Koubi et al. [33] were reported, and Biodentine appeared to perform well without any conditioning as a dentine substitute in cervical lining restorations or as a restorative material in approximal cavities when the cervical extent was below the CEJ. The only disadvantage was related to the operating time, which was longer than that of the resin-modified glass ionomer [33]. A contradictory report was that of Camilleri et al. [13], in a study comparing the physical properties of Biodentine with a conventional (Fuji IX) and a resin-modified glass ionomer (Vitrebond). When Biodentine was used as a dentin replacement material in the sandwich technique overlaid with composite, significant leakage occurred at the dentine-to-material interface. On the other hand, the materials based on glass ionomer cement displayed no chemical and physical changes or microleakage when used as bases under composite restorations [13].
Though the contradictory statement could be due to the methodology used for the detection of leakage, further studies are warranted to clarify the leakage occurring with calcium silicate based materials. 2.11. Discoloration. One study evaluated Biodentine from this perspective: Biodentine, along with 4 different materials, was exposed to different oxygen and light conditions and spectrophotometric analysis was performed at different time points over 5 days [35]. Favorable results were obtained for Portland Cement (PC) and Biodentine, and these 2 materials demonstrated color stability over a period of 5 days. Based on their results, the authors suggested that Biodentine could serve as an alternative for use under light-cured restorative materials in areas that are esthetically sensitive [35]. Wash-Out Resistance. Washout of a material is defined as the tendency of a freshly prepared cement paste to disintegrate upon early contact with blood or other fluids. The available study on this characteristic of Biodentine did not reveal favorable results, as the material demonstrated high washout with every drop applied in the methodology [9]. The authors attributed this result to the surfactant effect of the water-soluble polymer added to the material to reduce the water/cement ratio [9]. Biocompatibility of Biodentine Biocompatibility of a dental material is a major factor that should be taken into consideration, specifically when it is used in pulp capping, perforation repair or as a retrograde filling. During the aforementioned procedures, the material is in direct contact with the connective tissue and has the potential to affect the viability of periradicular and pulpal cells. Cell death under these circumstances occurs due to apoptosis or necrosis [36]. Therefore, it is essential that toxic materials are avoided and that materials promoting repair, or that are biologically neutral, are preferred during procedures in which the material is directly in contact with the surrounding tissue. Though the information accumulated so far regarding the biocompatibility of Biodentine is rather limited, the available data are generally in favor of the material in terms of its lack of cytotoxicity and tissue acceptability. Han and Okiji [37] compared Biodentine and white ProRoot MTA in terms of Ca and Si uptake by adjacent root canal dentine and observed that both materials formed tag-like structures. They observed that dentine element uptake was more prominent for Biodentine than for MTA. The same authors [38], in another study, also showed the formation of tag-like structures composed of Ca- and P-rich and Si-poor material. They also determined a high Ca release for Biodentine. Laurent et al. [39] were the first to show the promising biological properties of Biodentine on human fibroblast cultures. In another study by Laurent et al. [40], Biodentine was found to significantly increase TGF-β1 secretion from pulp cells. TGF-β1 is a growth factor whose role in angiogenesis, recruitment of progenitor cells, cell differentiation, and mineralization has been highlighted in recent research [40]. In a study performed by Zhou et al. [36], where Biodentine was compared with white MTA (ProRoot) and glass ionomer cement (Fuji IX) using human fibroblasts, both white MTA and Biodentine were found to be less toxic than the glass ionomer during the 1- and 7-day observation periods.
The authors commented that, despite the uneven and crystalline surface topography of both Biodentine and MTA compared to the smooth surface texture of the glass ionomer, cell adhesion and growth were more favorable on these materials than on the glass ionomer. They attributed this to the possible leaching of substances from the glass ionomer that adversely affect interactions with the material. On the other hand, over a longer incubation period, surviving cells could overcome the cytotoxic effect of the glass ionomer [36]. Another study comparing the biocompatibility and gene expression ability of Biodentine and MTA was one by Pérard et al. [41]. Based on the standpoint that three-dimensional (3D) multicellular spheroid cultures are currently considered to be the in vitro model providing the most realistic simulation of the human tissue environment, they performed a biocompatibility investigation using this type of modelling. Biodentine and MTA were determined to modify the proliferation of pulp cell lines. They observed similar behavior for Biodentine and MTA, supporting the indication of these 2 materials for direct pulp capping as suggested by the manufacturers [41]. A recently published article focused on the influence of Biodentine from another perspective and assessed the effects of different concentrations of the material on the proliferation, migration, and adhesion of human dental pulp stem cells (hDPSCs) obtained from impacted third molars [42]. Results showed increased proliferation of stem cells at 0.2 and 2 mg/mL concentrations, while cellular activity decreased significantly at the higher concentration of 20 mg/mL. Biodentine favorably affected healing when placed directly in contact with the pulp by enhancing the proliferation, migration, and adhesion of human dental pulp stem cells, confirming the bioactive and biocompatible characteristics of the material [42]. Biodentine as a Vital Pulp Treatment Material When the influence of a material on the pulpal response during vital procedures is to be evaluated, in vivo study designs are helpful, and animal and human teeth are generally used to demonstrate the effects of pulp capping agents. These should further be supported by clinical trials to establish a clear picture of the general characteristics of the materials. MTA, which is generally considered the gold standard, has been investigated in various human and animal experimental models. On the other hand, studies comparing MTA with Biodentine in terms of vital pulp treatment behavior are rather limited. The first study to demonstrate the induction of effective dentinal repair was the one by Tran et al. [43], where the material was applied directly on mechanically exposed rat pulps. In their study, where Biodentine was compared to MTA and calcium hydroxide in terms of reparative dentine bridge formation, they noted that the structure induced by Ca(OH)2 contained several cell inclusions, also called tunnel defects, as previously reported by Cox et al. in 1996 [44]. These defective regions were regarded as undesirable areas facilitating the migration of microorganisms towards the pulp and predisposing the tooth to endodontic infection. In contrast, the dentine bridge induced by Biodentine showed a pattern well localized at the injury site, unlike that caused by calcium hydroxide, which exhibited an expanding structure in the pulp chamber.
The quality of the formed dentine was also much more favorable than that obtained with calcium hydroxide, and an orthodentine organization was noted in which dentinal tubules could be clearly visualized. Moreover, the cells secreting the structure exhibited DSP expression as well as osteopontin expression, which are critical regulators of reparative dentine formation [44]. An interesting clinical and histological study performed on molars to be extracted for orthodontic reasons showed that Biodentine had a similar efficacy to MTA in the clinical setting and may well be regarded as an alternative for pulp capping procedures. Complete dentinal bridge formation and absence of an inflammatory response were observed as major findings [45]. Pulpotomy is another vital pulp treatment method in which Biodentine is advocated to be used. This method is widely used in pediatric dentistry and involves the amputation of the coronal pulp and the placement of a material to preserve the vitality of the radicular pulp tissue. This methodology is specifically useful and preferred when the coronal pulp tissue is inflamed and direct pulp capping is not a suitable option. Shayegan et al. [46] performed a study in which they assessed the pulpal response of primary pig teeth to Biodentine when used as a pulp capping as well as a pulpotomy material after 7, 28 and 90 days. Their results showed that Biodentine has bioactive properties, encourages hard tissue regeneration, and provokes no signs of moderate or severe pulpal inflammation. They further noted that the material had the ability to maintain successful marginal integrity due to the formation of hydroxyapatite crystals at the surface, which enhances the sealing ability. Owing to this sealing potential, the risk of microleakage, which may cause the pulp to become infected or necrotic and jeopardize the success of vital treatment procedures, is reduced. Another important comment was that the hard tissue formation due to calcium hydroxide was rather a defense response of the pulp against the irritant nature of the material, whereas calcium silicate based materials are compatible with cell recruitment. Furthermore, the necrotic layer caused by calcium hydroxide appeared to be much larger than that caused by the other materials [46]. Zanini et al. [47] also evaluated the biological effect of Biodentine on murine pulp cells by analysing the expression of several biomolecular markers after culturing OD-21 cells with or without Biodentine. Their results, consistent with other studies, were in favor of Biodentine, which was found to be bioactive due to its ability to increase OD-21 cell proliferation and biomineralization. Laurent et al. [40] indicated that, though the interactions between pulp capping materials and the injured pulp tissue are as yet unclear, there is growing evidence on the role of growth factors, with TGF-β1 being the most important one. The main role of these factors is signalling for reparative dentinogenesis. In a recently published article, they assessed the reparative dentin synthesis capacity of Biodentine as well as its ability to modulate TGF-β1 secretion by pulp cells, a factor previously shown to be released from dentine by calcium hydroxide [48,49]. Using an entire human tooth culture model, they showed that, upon application on the exposed pulp, Biodentine had the potential to significantly increase TGF-β1 secretion from pulp cells and induce an early form of reparative dentin synthesis [40].
In addition to the aforementioned favorable biological results, supportive statements were made by Marijana et al. [50], who confirmed the therapeutic effects of Biodentine after pulp capping in Vietnamese pigs and noted the resemblance of the pulp reaction to that caused by ProRoot MTA. Case Reports Where Biodentine Is Used A survey of the available literature shows that only a few case reports involving the use of Biodentine have been published so far. However, all of the articles retrieved present the material as a favorable and promising alternative for clinical applications. Villat et al. [51] performed a partial pulpotomy in an immature second right premolar of a 12-year-old patient, whom they followed up for 6 months. The authors detected a fast tissue response, radiologically evident from dentine bridge formation and continued root development in the short term. Furthermore, no pain or complaints were reported by the patient throughout the observation period. They commented that the increased speed of the pulpal response as well as the more homogeneous dentine bridge formation render this material a suitable choice compared to calcium hydroxide [51]. One report has been retrieved in which the use of Biodentine has been assessed as a retrograde material [52]. It describes the management of a large periapical lesion associated with the maxillary right central and lateral incisors of a 24-year-old patient who had a history of previous traumatic injury. Following the use of Biodentine as a retrograde material during apical surgery, the patient was followed for a period of 18 months, during which progressive periapical healing was evident [52]. Although case reports are definitely important resources for confirming a material's suitability for clinical usage, it is undeniable that more reliable results can be achieved through randomized long-term clinical trials. Accumulation of long-term clinical trial data over a prolonged period might provide evidence-based support, as has been the case for mineral trioxide aggregate. Though the chemical characteristics and general features of Biodentine are similar to those of MTA, it is clear that a sufficient number of clinical trials should be conducted before definite conclusions can be drawn. So far, there is one 3-year clinical trial in which Biodentine has been used and in which the material has been assessed in terms of various parameters such as marginal adaptation, interproximal contact, surface roughness, and postoperative pain [12]. Biodentine was found to show favorable clinical performance for up to 6 months, though the other test material (Z100) displayed better scores in terms of anatomical form, marginal adaptation, and proximal contact. After a 1-year period, the authors continued the investigation by adding Z100 over Biodentine using the sandwich technique, which resulted in very satisfactory treatment performance. They concluded that Biodentine is a well-tolerated dentine substitute for posterior teeth for up to 6 months, during which abrasion is the main degradation process. No discoloration was noted, and the material even yielded superior results compared to Z100 in this respect. In general, Biodentine was advocated for use under composite in posterior restorations, supporting the major standpoint from which the material was initially developed, in other words as a dentine replacement material [12].
The summarized reports are the only clinical information published so far, and it is presumed that, as more clinical data on Biodentine are released, clinicians will be able to make sounder and more reliable decisions regarding its usage. Conclusions Biodentine, a popular and contemporary tricalcium silicate based dentine replacement and repair material, has been evaluated in quite a number of aspects ever since its launch in 2009 (Table 1). The studies are generally in favor of this product in terms of physical and clinical aspects, despite a few contradictory reports. Though accumulation of further data is necessary, Biodentine holds promise for clinical dental procedures as a biocompatible and easily handled product with a short setting time. As more research is performed regarding this interesting alternative to MTA, we will have more reliable data and will be able to implement Biodentine more confidently in routine clinical applications.
2018-04-03T02:00:16.474Z
2014-06-16T00:00:00.000
{ "year": 2014, "sha1": "cbaed61f1f8ef683001f12fa115369e5c8540d91", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/bmri/2014/160951.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e16ed19a46f4997ade3740c8a28bf545a0adafff", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
54488835
pes2o/s2orc
v3-fos-license
The anti-apoptotic BAG3 protein is involved in BRAF inhibitor resistance in melanoma cells BAG3 protein, a member of the BAG family of co-chaperones, has a pro-survival role in several tumour types. BAG3 anti-apoptotic properties rely on its ability to bind several intracellular partners, thereby modulating crucial events such as apoptosis, differentiation, cell motility, and autophagy. In human melanomas, BAG3 positivity is correlated with the aggressiveness of the tumour cells and can sustain IKK-γ levels, allowing a sustained activation of NF-κB. Furthermore, BAG3 is able to modulate BRAFV600E levels and activity in thyroid carcinomas. BRAFV600E is the most frequent mutation detected in malignant melanomas and is targeted by Vemurafenib, a specific inhibitor found to be effective in the treatment of advanced melanoma. However, patients with BRAF-mutated melanoma may be insensitive ab initio or, more often, develop acquired resistance to treatment with this molecule. Here we show that BAG3 down-modulation interferes with BRAF levels in melanoma cells and sensitizes them to Vemurafenib treatment. Furthermore, down-modulation of the BAG3 protein in an in vitro model of acquired resistance to Vemurafenib can induce sensitization to BRAFV600E-specific inhibition by interfering with the BRAF pathway through reduction of ERK phosphorylation, and also with parallel survival pathways. Future studies on BAG3 molecular interactions with key proteins responsible for acquired BRAF inhibitor resistance may represent a promising field for the design of novel multi-drug treatments. INTRODUCTION Melanoma incidence is steadily increasing worldwide, and people affected by the metastatic form of this malignancy have had a median survival time of 6-8 months [1]. In the last ten years, the discovery of BRAF mutations in melanoma created the first opportunity to develop oncogene-directed therapy, which has produced major clinical responses and significantly improved survival [2,3,4]. Although the outstanding results in patients give hope that melanoma can be cured, the achievement of prolonged survival is hampered by the appearance of resistance mechanisms that may quickly develop and lead to relapse in patients treated with BRAF inhibitors [5,6]. Recently, clinical evidence of the higher effectiveness of combinatorial trials using BRAF inhibitors together with MEK and, to a lesser extent, PI3K inhibitors is providing further treatment options [7]. However, the issue of acquired resistance to BRAF inhibitors still remains a challenge [6]. Indeed, what is needed is progress in understanding the multiple coexistent aberrations in resistant melanoma cells and in addressing novel multi-target therapeutic modules to narrow the propensity for growth and spreading of resistant tumours. BAG3 protein, a member of the family of heat shock protein (HSP) 70 co-chaperones that share the BAG domain, is expressed in a wide range of human tumours; under physiological conditions, its expression is conversely restricted to a few cell types (such as myocytes) [8,9]. Recently, we reported that BAG3 in melanomas appeared to be specifically expressed in the cytoplasm of neoplastic cells, while normal skin and benign nevi were negative [10]. More recently, we identified a subgroup of stage III melanoma patients, i.e.
patients with 2-3 positive lymph nodes, whose clinical behaviour is influenced by the expression of the anti-apoptotic BAG3 protein in lymph node metastases, suggesting that BAG3 staining of lymph node biopsies could therefore contribute to patient prognosis and stratification for specific therapeutic approaches [11]. BAG3 protein has been assigned a role in sustaining growth and in contributing to chemotherapy resistance in several tumour types [12,13,14,15,16]. We also demonstrated that, in melanoma cells, BAG3 is able to modulate the Hsp70-mediated delivery of the IKKγ subunit of the IKK complex to the proteasome, thereby sustaining NF-κB activation and inhibiting cell apoptosis. In a melanoma xenograft model, bag3 silencing indeed resulted in a significant reduction of tumour growth with consequently prolonged animal survival [17]. Furthermore, it was reported that in thyroid cancer cells (harbouring the BRAFV600E mutation) BAG3 can regulate cell growth both in vitro and in vivo, and the underlying molecular mechanism appears to rely on BAG3 binding to BRAF, which protects BRAF from proteasome-dependent degradation [18]. Toward the elucidation of the mechanisms by which resistance develops in treatment-resistant melanomas, we think that a contribution to this issue may be provided by the assessment of the role of BAG3 in the response to therapy in melanoma cells. This in turn will lead to a rational basis for combination strategies that will include BAG3 silencing/inhibition aimed at circumventing resistance. RESULTS BAG3 protein is highly expressed in melanoma metastases carrying the BRAFV600E mutation and sustains BRAFV600E levels in A375 melanoma cells BAG3 protein has been described for its antiapoptotic role in melanoma cells [17], and its expression in melanoma metastatic lymph nodes was correlated with the aggressiveness of the tumour [11]. These pieces of evidence prompted us to analyse more deeply a possible involvement of BAG3 in melanoma tumour development. To this end, we analysed BAG3 expression by immunohistochemistry (IHC), using an anti-BAG3 monoclonal antibody (AC-1), in a series of tissue samples from tumours and metastases from 41 patients with advanced malignant melanoma. The intensity and distribution of immunostaining were used to assign the BAG3 signal a score from 0 to 2. In particular, tumour tissue samples showing high positivity were classified with a score of 2; those samples were characterized by strong to moderate staining and a homogeneous distribution of positivity within tumour cells. Conversely, scores of 1 or 0 were assigned when the BAG3 immunostaining was weak or absent, respectively (Figure 1A). In our series, we identified a subgroup composed of 26 patients for whom we had information about BAG3 staining in both primary tumours and metastases. As shown in Figure 1B, our analysis revealed that BAG3 expression is significantly enhanced in metastatic lesions as compared to primary tumours in this subgroup of patients. Indeed, more than 55% of the patients' metastases were classified as score 2, while only 10% were negative (Fisher exact test p = 0.0001). Furthermore, in 10 of these patients we observed that BAG3 positivity within the tumour tissue was increased in the metastatic sample with respect to the primary tumour. These data suggest a potential role of the antiapoptotic BAG3 protein in maintaining metastatic melanoma cell survival and in sustaining tumour development.
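The score-by-group comparisons reported here rest on contingency tables of BAG3 IHC scores analysed with the Fisher exact test. The Python sketch below illustrates the structure of such an analysis only: the counts are invented placeholders, not the study data, and because scipy.stats.fisher_exact handles only 2 x 2 tables, the sketch uses a chi-square test and simply points to where an exact R x C test (e.g., fisher.test in R) would be substituted.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 3 contingency table of BAG3 IHC scores (columns: score 0, 1, 2);
# rows are sample types. Counts are illustrative assumptions.
table = np.array([
    [8, 10, 8],    # primary tumours
    [3, 8, 15],    # metastases
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# An exact test on an R x C table, as used in the paper, requires a dedicated
# implementation (e.g., fisher.test in R or a Monte Carlo permutation of the table).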
In melanoma disease, approximately 50-60% of tumours contain a mutation in the gene that encodes BRAF, which leads to constitutive activation of downstream signaling in the MAP kinase pathway. In a previous report [10], we did not observe any significant change in BAG3 positivity distribution between BRAF wild-type and BRAF-mutated primary tumour samples. In our series we obtained similar results (data not shown); in addition, we analysed BAG3 expression in metastatic samples of 21 patients carrying the BRAFV600E mutation compared to those of 8 patients with a wild-type BRAF gene. We observed that high BAG3 expression appears to be significantly more frequent in BRAF-mutated metastatic specimens, as shown in Figure 1C (Fisher exact test p = 0.0022). This result demonstrated that BAG3 protein is highly expressed in metastatic melanoma cells carrying the BRAFV600E mutation. Evidence from a recent report has demonstrated that BAG3 protein sustains anaplastic thyroid carcinoma (ATC) growth both in vitro and in vivo. The molecular mechanism relies on BAG3 binding to BRAF, protecting the latter from Hsp70-mediated, proteasome-dependent degradation [18]. Analogously to cutaneous malignant melanoma, ATC is characterized by the BRAFV600E mutation activating the kinase domain, which in turn strongly sustains the proliferative and oncogenic characteristics of these human tumour cells, mainly via ERK kinase [19]. Thus, we first investigated whether the BAG3 and BRAF proteins interact in A375 melanoma cells, which harbour the BRAFV600E mutation. To verify the BAG3/BRAF interaction we performed a co-IP experiment, and we found that BRAF protein was co-immunoprecipitated with BAG3 using a BAG3-specific antibody. Moreover, Hsp70 was found to co-immunoprecipitate with both BAG3 and BRAF (Figure 1D), as reported for ATC [18]. Next, we sought to determine whether BAG3 silencing could accelerate BRAF degradation, thus lowering its total levels in the cells. As illustrated in Figure 1E, cell treatment with bag3siRNA resulted in a reduction of BRAF intracellular levels compared with BRAF levels in control and NT siRNA-treated cells, and, interestingly, the reduction of BRAF levels resulted in the lack of ERK protein phosphorylation. These findings suggest that BAG3 protein is involved in one of the major mechanisms that sustain melanoma cell growth, i.e., the BRAF/MEK/ERK axis. A375 melanoma cell resistance to the BRAF inhibitor Vemurafenib is overcome by BAG3 silencing In recent years, specific inhibitors of the BRAF/MEK/ERK pathway, alone or in combination, have been used in patients with advanced melanoma and, although resistance to a combined therapy with a BRAF inhibitor and a MEK inhibitor results in prolonged patient survival compared to treatment with the single agents, resistance remains a significant problem [20]. Resistance of cancer cells to Vemurafenib can be established through two major mechanisms: ERK signaling activation in the presence of the BRAF inhibitor and activation of parallel pro-survival pathways [7]. The first is very frequently due to BRAF amplification, NRAS mutations, or BRAF splice variants, all promoting dimerization of mutant BRAF with CRAF or wild-type BRAF [21]. The latter refers to activation of the PI3K/mTOR signaling cascade. It was previously demonstrated that BAG3 inactivation or overexpression results in induction or inhibition, respectively, of both spontaneous and chemotherapy-induced apoptosis [8].
As BAG3 protein is involved in sustaining BRAF levels and ERK phosphorylation in A375 melanoma cells, we sought to verify whether BAG3 silencing can affect the response of these cells to prolonged Vemurafenib treatment. To this end, A375 cells were transiently transfected with bag3siRNA or non-targeting siRNA (NT-siRNA) at a final concentration of 200 nM and treated with 2 μM Vemurafenib for 120 hours. As shown in Figure 2A, the levels of hypo-diploid nuclei induced by Vemurafenib were significantly increased in cells where BAG3 was silenced. We also analysed cleaved caspase-3/7 content in the same experimental setting and confirmed that interfering with BAG3 protein expression sensitizes melanoma cells subjected to prolonged Vemurafenib treatment to apoptosis (Figure 2B). The BRAF-mutated protein has a basal kinase activity 10-fold higher than its wild-type counterpart, resulting in hyper-activation of the MEK/ERK pathways [22]. Interestingly, extended Vemurafenib treatment induced a rebound of phospho-ERK1/2 as a first sign of resistance establishment [23]. We wanted to verify the levels of BRAF and phospho-ERK1/2 in BAG3-silenced cells after this prolonged treatment with Vemurafenib (2 μM for 120 hours). As shown in Figure 2C, we observed that BAG3 down-modulation during BRAF inhibitor treatment resulted in the loss of ERK phosphorylation, thus suggesting that BAG3 can sustain ERK activation in the presence of the BRAF inhibitor. Interestingly, BRAF levels, as observed in untreated cells, were also down-modulated, suggesting that resistance pathways due to BRAF gene gain or amplification could be sensitive to BAG3 silencing. This aberration has been detected in patients treated with a BRAF inhibitor alone but also in those treated in combination with a MEK inhibitor [24]. Furthermore, it was previously demonstrated that different levels of expression of BRAFV600E modulate melanoma sensitivity to Vemurafenib [25]. In order to better analyse the role of the BAG3 protein in Vemurafenib-resistant cells, we cultured A375 cells with increasing concentrations (0.02 to 2 μM) of the BRAF inhibitor, and cells able to grow in the presence of 2 μM Vemurafenib emerged after ~2 months of culture. We thus obtained the cell line A375VR (A375 Vemurafenib Resistant). Notably, these cells displayed a larger cell size and elongated morphology (data not shown). As shown in Figure 2D, while the A375 parental cell line displayed more than 50% mortality after 120 hours of treatment with Vemurafenib at a 3 μM concentration, A375VR cells did not lose viability with respect to control cells. To further confirm the establishment of resistance in A375VR, we tested the effect of Vemurafenib at a concentration of 2 μM, the dose at which the resistant cells were routinely cultured, on ERK phosphorylation in resistant cells compared with their parental counterpart. Vemurafenib treatment of parental cells led to inhibition of phospho-ERK1/2 after 24 hours, while resistant cells were not responsive to the BRAF inhibitor in terms of phospho-ERK1/2 down-regulation. Additionally, we analysed the cell-cycle profiles of parental and resistant cells treated with Vemurafenib at two different doses (0.3 and 3 μM) for 8, 24, and 48 hours (Figure 2F). Short-term (8 hours) treatment of parental cells with Vemurafenib had limited effects on the cell-cycle profile of A375 cells, whereas prolonged treatment with Vemurafenib (24-48 hours) led to a strong reduction of S-phase cells at all doses and the accumulation of sub-G1 cells at the dose of 3 μM after 48 hours.
On the other hand, resistant cells were able to overcome the Vemurafenib-induced cytostatic effect and the accompanying cell death. To investigate the involvement of the BAG3 protein in acquired resistance to Vemurafenib, we assessed the effect of bag3siRNA on apoptosis of A375VR cells. For this purpose, we treated A375VR cells with a specific siRNA for BAG3 or with NT-siRNA while they were continuously exposed to 2 μM Vemurafenib and analysed the cells for hypo-diploid nuclei content at 72, 96, and 120 hours. As shown in Figure 2G, A375VR cells displayed a significant re-sensitization to Vemurafenib when treated with a specific bag3siRNA, evident after 72 hours; the rate of apoptosis reached 40% after 120 hours of continuous exposure to the BRAF inhibitor and bag3siRNA. Since the maximal apoptotic effect was found at 120 hours, this time point was chosen for subsequent experiments.
Figure 1. Representative images of BAG3-negative (score 0), BAG3 low-positive (score 1) and BAG3 high-positive (score 2) metastatic melanoma samples stained using a monoclonal anti-BAG3 antibody revealed with a biotinylated secondary antibody; sections were counterstained with hematoxylin (A). mut, mutation; WT, wild type; the Fisher exact test was calculated using 2 × 3 contingency tables (B, C). A375 extracts were immunoprecipitated with an anti-BAG3 monoclonal antibody, and immune complexes were then immunoblotted with antibodies recognizing BRAF, BAG3, Hsp70, or GAPDH as indicated; immunoprecipitation with mouse IgGs was used as a negative control (D). BAG3 down-modulation reduces levels of BRAF protein and affects ERK phosphorylation in A375 cells: A375 cells were transfected twice consecutively with a BAG3-specific or a non-targeting (NT) siRNA (200 nM), with the second transfection 72 hrs after the first one; 120 hrs after the first transfection, cells were analysed by western blot using anti-BAG3 polyclonal, anti-BRAF, anti-pERK and anti-ERK1 antibodies; an anti-GAPDH antibody was used as loading control, and the levels of BAG3 and BRAF were quantified by densitometry (E).
Indeed, we also observed in the same experimental setting a significant increase in the appearance of cleaved caspase-3/7 in cells where BAG3 was silenced, as shown in Figure 2H. These findings indicate that Vemurafenib-resistant A375 melanoma cells can be sensitized to apoptosis by interfering with BAG3 protein levels. BAG3 down-modulation restores A375VR sensitivity to Vemurafenib by acting on the ERK pathway but also on parallel survival pathways To identify pathways implicated in acquired resistance to Vemurafenib, we plated A375VR cells at low density and selected 5 different clones: A375VR#5, A375VR#6, A375VR#7, A375VR#8, and A375VR#9. The selected clones were continuously kept in culture in the presence of 2 µM Vemurafenib. As already reported, the pathway activated by the EGF receptor (EGFR) plays a crucial role in the resistance of melanoma cells to Vemurafenib [26]. To further confirm these data, phosphorylated EGFR (pEGFR) and EGFR protein levels were analysed by Western blotting in the selected clones and compared with those in A375 and A375VR. As shown in Figure 3B, A375VR cells were characterized by increased levels of both pEGFR and EGFR proteins in comparison with those detected in A375. On the other hand, each clone was characterized by different levels of pEGFR and EGFR proteins. We also showed that the phosphorylation of AKT, an EGFR downstream effector, was enhanced in resistant cells and in the clones.
Our analysis confirmed that the phosphorylation of several signal transducers and activators of transcription was consistently increased. Such an increase was also evident for STAT3 phosphorylation in resistant cells and clones; in this regard, the proteins belonging to the STAT family of transcription factors are activated by cytokine and growth factor receptors and can lead to cancer progression [27] (Figure 3A). To further identify mechanisms implicated in driving resistance in our experimental model, we tested the expression of BRAF in the Vemurafenib-resistant subclones by Western blotting. As shown in Figure 3A, there was no difference in BRAF protein expression between the A375VR cell line and its parental counterpart, but all five A375VR subclones expressed different levels of the BRAF-mutated protein. To investigate the mechanisms by which the increased expression levels of pEGFR, EGFR, pAKT, pSTAT3, and BRAF could influence the response of Vemurafenib-resistant clones to BAG3 down-modulation, we transfected the five A375VR subclones with bag3siRNA or NT-siRNA while continuously culturing them with 2 µM Vemurafenib, as previously described for A375VR. At 120 hours after transfection, we evaluated the percentage of cells with hypodiploid nuclei by propidium iodide staining. BAG3 down-modulation was found to be able to re-sensitize the different clones, though the degree of this effect again varied between the different clones. In particular, A375VR#6 was the clone most susceptible to BAG3 down-modulation, reaching a rate of apoptosis of 42.5% at 120 hours after transfection (Figure 3B). The induction of apoptosis in A375VR#6 subjected to BAG3 silencing was further confirmed by the appearance of cleaved caspase-3, as shown in Figure 3C. BAG3 silencing in A375VR#6 also resulted in the reduction of BRAF levels, as demonstrated in the parental cell line (data not shown). The A375VR#6 clone was used to investigate more deeply the molecular mechanisms through which BAG3 silencing could restore sensitivity to Vemurafenib in melanoma cells with acquired resistance to this inhibitor. We conducted phosphoprotein-array analysis to identify any pathway that may be deregulated in the resistant clone after treatment with a specific agent able to knock down BAG3 protein expression. Therefore, A375VR#6 cells were treated with bag3siRNA or NT-siRNA as previously described and, after 120 hours, cells were collected, lysed, and analysed with a human phosphoprotein array. We observed down-modulated phosphorylation of the ERK protein after BAG3 silencing in the resistant clone. Furthermore, we observed that RSK and the transcription factors CREB and STAT3/2, three downstream targets of pERK, were down-modulated as well (Figure 3D-3E). It is of note that recent papers have demonstrated that molecules able to interfere with STAT3 can overcome Vemurafenib resistance in melanoma cells [27] and that hyperactivation of CREB confers acquired resistance to BRAFV600E inhibition in melanoma via upregulation of AEBP1 expression and consequent activation of NF-κB [28]. Targets of the PI3K pathway, such as AKT, mTOR and p70S6K, were also found to be significantly dephosphorylated in cells where BAG3 was silenced; these have also been reported as interesting targets for overcoming acquired resistance to BRAFV600E inhibition in melanoma [29].
Interestingly, Hck and Lyn, members of the Src family of tyrosine kinases, also showed decreased phosphorylation when BAG3 expression was inhibited; these proteins act upstream of the ERK kinase and become activated upon aberrant expression and/or activity of EGFR. Inhibition of EGFR or Src family tyrosine kinases has also been found to be effective in overcoming BRAF inhibitor resistance in melanoma cells [30]. We also confirmed by immunofluorescence that phosphorylation of the transcription factors CREB and STAT3 was decreased in the A375VR#6 cell line upon transfection with a specific bag3siRNA (Figure 3F). Altogether, our data show that BAG3 silencing can lower several survival pathways that are hyper-activated during the acquisition of resistance. DISCUSSION The increased knowledge about the molecular mechanisms underlying the pathogenesis of cutaneous melanoma has led to the development of a personalized approach for the treatment of melanoma [31]. Treatment of BRAF-mutant metastatic melanoma with mitogen-activated protein kinase (MAPK) pathway targeted therapies (BRAF/MEK inhibitors) has improved outcomes and revolutionized disease management for patients with advanced stage disease [7]. However, the clinical experience with BRAF inhibitors, and in particular with Vemurafenib, has also shown that the efficacy of long-term treatment for patients with melanoma is hampered by the development of acquired drug resistance [6,32]. Therefore, it is important to determine whether BRAFV600E-mutant melanoma cells with acquired resistance to Vemurafenib can be re-sensitized to the treatment. BAG3 is an anti-apoptotic protein that has been shown to sustain cell survival in a variety of tumour types, including melanoma [9,12,13,14,15,16]. BAG3 downregulation appears indeed to induce cell apoptosis and impair tumour growth in melanoma, both in vitro and in vivo [17], suggesting that it may represent a novel target for tumour therapy. It has been indicated that the role of BAG3 in melanoma is due to its anti-apoptotic properties; in fact, this protein has been shown to protect melanoma cells from death through the interaction with apoptosis-regulating proteins, such as the IKKγ subunit of the NF-κB-activating complex IKK [17]. Furthermore, BAG3 expression was reported to be associated with melanoma progression [10]; more recently, an interesting correlation between BAG3 protein expression and prognosis in patients affected by metastatic melanoma with positive lymph nodes has been described [11]. The results shown in this report indicate that BAG3 is highly expressed in the majority of metastatic melanomas carrying the BRAFV600E mutation and, therefore, this protein could be a possible therapeutic target. Moreover, in an in vitro model of acquired resistance to Vemurafenib, we have demonstrated that BAG3 silencing was able to restore sensitivity to the BRAF inhibitor and that the mechanism responsible for this re-sensitization seems to act on more than one of the survival pathways responsible for the acquired resistance. We can summarize that BAG3 silencing is able to significantly impact one of the major features of BRAF inhibitor resistance, that is, persistent ERK phosphorylation, and that this impact extends to downstream targets such as RSKs, CREB and STAT3. Interestingly, BAG3 down-modulation also impacts the Lyn and Hck tyrosine kinases, which act upstream of ERK and are found hyper-phosphorylated upon aberrant expression and/or activity of EGFR.
It is of note that other proteins containing the Src module were found to interact with BAG3 via its proline-rich domain [33]. Furthermore, BAG3 is also involved in the PI3K/AKT/mTOR survival pathway; indeed, BAG3 sustains the levels of AKT and its downstream targets, as previously reported [34]. Recently, an inhibitor of Hsp70 able to disrupt the BAG3/Hsp70 complex, which acts via the BAG domain of the BAG3 protein, has also been described [33]. We believe that a part of the observations reported in this paper can be ascribed to the Hsp70-mediated activities of the BAG3 protein, such as, for instance, the effects on BRAF and AKT. However, the activity on the other targets may be obtained through the other functional domains of BAG3, such as the WW domain, the proline-rich region or the IPV motif. Thus, additional studies and the design of molecules that can selectively bind to these portions of the protein may represent a valid tool to selectively control and inhibit survival pathways in resistant cells. In conclusion, our data strongly support the idea that BAG3 can represent a valid target in the treatment of BRAF inhibitor-resistant metastatic melanomas.
Figure 2. A375 cells were transfected twice consecutively with a BAG3-specific or a non-targeting (NT) siRNA (200 nM), with the second transfection 72 hrs after the first one, and were treated with Vemurafenib (2 µM); 120 hrs after the first transfection, cells were collected, labelled with propidium iodide and analysed by flow cytometry; the percentage of cells in the sub-diploid apoptotic region was quantified for each condition, and the graph depicts the mean percentage of Sub G0/G1 cells (± SD) (A). A375 cells were treated with a BAG3-specific or a non-targeting (NT) siRNA (200 nM) as previously described, stained with 5 µM CellEvent™ Caspase-3/-7 Green detection reagent for 30 min at 37°C and analysed by flow cytometry; data are presented as the mean ± SD of three independent determinations (B). BAG3 down-modulation reduces levels of BRAF protein and affects ERK phosphorylation in A375 cells: A375 cells were transfected as previously described and, after 120 h, total protein extracts were analysed by western blot using anti-BAG3 polyclonal, anti-BRAF, anti-pERK and anti-ERK1 antibodies; an anti-GAPDH antibody was used as loading control, and the levels of BAG3 and BRAF were quantified by densitometry and normalized to GAPDH (O.D. BAG3/O.D. GAPDH; O.D. BRAF/O.D. GAPDH) (C). A375 cells acquire resistance to Vemurafenib (PLX4032) after long-term drug treatment: we cultured BRAF-mutant melanoma cells (A375) in increasing concentrations (up to 2 µM) of the BRAF inhibitor Vemurafenib (PLX4032) and, after 2 months, isolated a resistant cell line (A375VR) that was less sensitive to PLX4032 than the parental cell line; parental and resistant cells were grown in the presence of the indicated doses of Vemurafenib for 120 hrs, and relative cell viability was assessed by MTT assay (D). Resistant cells bypass the G1/S arrest induced by PLX4032: A375 and A375VR cells were treated with 2 µM Vemurafenib for 8 and 24 hrs, and the levels of phosphorylated ERK (pERK) and ERK1 were then analysed by western blot using anti-pERK and anti-ERK antibodies, with GAPDH as loading control (E). A375 and A375VR cells were treated with different doses of Vemurafenib for the indicated times; cells were harvested and stained with propidium iodide for cell-cycle analysis (F). Down-regulation of BAG3 re-sensitizes the A375VR cell line to Vemurafenib: A375VR cells were transfected with a BAG3-specific or a NT siRNA (200 nM); after 24 hrs they were treated with 2 µM Vemurafenib, and after 72, 96, and 120 hrs cells were collected, labelled with propidium iodide and analysed by flow cytometry; the percentage of cells in the sub-diploid apoptotic region was quantified for each condition, and the graph depicts the mean percentage of Sub G0/G1 cells (± SD) (G). A375VR cells were treated with a BAG3-specific or a non-targeting (NT) siRNA (200 nM) as previously described, stained with 5 µM CellEvent™ Caspase-3/-7 Green detection reagent for 30 min at 37°C and analysed by flow cytometry; data are presented as the mean ± SD of three independent determinations (H). *0.01 < p < 0.05; **0.001 < p < 0.01; ***p < 0.001.
Cell cultures and reagents The melanoma cell line A375 was obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA) and cultured in DMEM (Dulbecco's modified Eagle's medium) supplemented with 10% fetal bovine serum (FBS). The media for culturing the above cell line were purchased from Lonza (Bergamo, Italy) and supplemented with 100 U/mL penicillin and 2 g/mL streptomycin (Sigma-Aldrich Corp). A375 cells were cultured in increasing concentrations of PLX4032 (from 0.1 to 2 μM) to generate the resistant cell line (A375VR). To generate the resistant clones A375VR#5, A375VR#6, A375VR#7, A375VR#8 and A375VR#9, the resistant cell line (A375VR) was plated at a density of 1 cell/well in a 96-well plate. Resistant lines were maintained in the continuous presence of 2 μM PLX4032, which was replenished every 72 hr. Immunohistochemistry Four-µm-thick sections of each tissue, mounted on poly-L-lysine-coated glass slides, were analysed by immunohistochemistry (IHC) using the anti-BAG3 mAb AC-1 (BIOUNIVERSA s.r.l., SA, Italy). The IHC protocol included deparaffinization in bioclear, rehydration through descending degrees of alcohol up to water, incubation with 3% hydrogen peroxide for 5 minutes to inactivate endogenous peroxidases, and non-enzymatic antigen retrieval in CC1 buffer (Ventana Medical System), pH 8.0, for 36 minutes at 95°C. After rinsing with phosphate-buffered saline (PBS 1×), samples were blocked with 5% fetal bovine serum in 0.1% PBS/BSA and then incubated for 1 hour at room temperature with the mAb under saturating conditions. The standard streptavidin-biotin-linked horseradish peroxidase technique was then performed, and 3,3′-diaminobenzidine was used as the substrate chromogen solution for the development of peroxidase activity. Finally, the sections were counterstained with hematoxylin; slides were then coverslipped using a synthetic mounting medium. For immunohistochemistry, scoring was performed by at least two investigators (in very few borderline cases, classification of BAG3 staining required additional investigators and was based on the consistency of the majority of them). Patient samples Tumour samples were obtained from a consecutively collected series of unselected patients who underwent surgical resection of metastatic malignant melanoma at both the Local Health Unit 1 (ASL1) and the Azienda Ospedaliero Universitaria (AOU), Sassari (Italy). Patients were informed about the aims and limits of the study and gave their written informed consent before tissue samples were collected.
The study was reviewed and approved by the ethical review boards of both Local Health Unit 1 (Azienda Sanitaria Locale 1; ASL1) and the University of Sassari. Western blot Cells were harvested and lysed in a buffer containing 20 mM HEPES (pH 7.5), 150 mM NaCl and 0.1% Triton (TNN buffer) supplemented with a protease inhibitor cocktail (Sigma), and subjected to 3 cycles of freezing and thawing. Lysates were then centrifuged for 20 min at 15,000 × g and stored at −80°C. The protein amount was determined by Bradford assay (Bio-Rad, Hercules, CA), and 30 µg of total protein were separated on 10% SDS-PAGE gels and electrophoretically transferred to nitrocellulose membranes. Nitrocellulose blots were blocked with 10% nonfat dry milk in TBST buffer (20 mM Tris-HCl at pH 7.4, 500 mM NaCl and 0.01% Tween) and incubated with primary antibodies in TBST containing 5% non-fat dry milk overnight at 4°C. Immunoreactivity was detected by sequential incubation with horseradish peroxidase-conjugated secondary antibodies and ECL detection reagents (Amersham Life Sciences Inc., Arlington Heights, IL, U.S.). Signal detection was performed using an ImageQuant™ LAS 4000 (GE Healthcare, U.S.). Apoptosis Cells were seeded in 24-well plates (1 × 10^4 cells per well) and treated with 2 μM Vemurafenib and/or transfected with BAG3 small-interfering RNA (siRNA) or non-targeting siRNA (NT-siRNA). At the end of treatment, the percentage of sub-G0/G1 cells was analysed via propidium iodide incorporation into permeabilized cells, and flow cytometry was performed as previously described [17]. Each experimental point was performed in triplicate, and the data reported are the mean of at least three independent experiments. Error bars depict the standard deviation (SD). For the quantification of caspase-3/7 activity, cells were labeled with 500 nM CellEvent caspase-3/7 green detection reagent (Life Technologies) for 30 minutes at 37°C. A total of 10,000 stained cells per sample were acquired and analyzed in a FACSVerse flow cytometer using FACSSuite software (Becton Dickinson). Co-immunoprecipitation For immunoprecipitation of the BAG3 protein, the anti-BAG3 mAb AC-2 was coupled to Dynabeads (Invitrogen) following the manufacturer's instructions. Briefly, 500 µg of cell extract were immunoprecipitated at 4°C overnight and then analysed by Western blot using a rabbit anti-BAG3 polyclonal primary antibody, an anti-BRAF antibody, an anti-HSC70 antibody, and an anti-GAPDH antibody. Cell viability and cell-cycle analysis Cells were synchronized by holding them at confluence for 2 days, which arrests cells in the G0/G1 phase of the cell cycle (Dai et al., 2007). Synchronized cells were plated at a density of 5 × 10^3 cells/cm^2 and after 2 h were treated with the indicated concentrations of Vemurafenib (see figure legends). Cell viability was measured by the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide; M2128) assay. Cell-cycle analysis was performed after incubation of the cells with PI solution (2.5 mg/ml propidium iodide, 0.75 M sodium citrate pH 8.0, 0.1% Triton). Fluorescence intensity was measured by flow cytometry (FACScan, Becton Dickinson, BD, Franklin Lakes, NJ, USA). For each sample, 10,000 events were recorded, and histograms of red fluorescence versus counts were generated to evaluate the percentages of cells in each phase of the cell cycle. The proportion of cells in each phase was calculated using ModFit LT software (BD).
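As a rough illustration of the read-outs described above, the following Python sketch computes the percentage of hypodiploid (sub-G0/G1) events from simulated per-event propidium iodide fluorescence and a relative viability from MTT absorbances. All values, the gate position and the simulated distributions are assumptions for illustration only; the actual analyses were performed with FACSSuite and ModFit LT as stated.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-event PI fluorescence (arbitrary units) for 10,000 events:
# a G0/G1 peak around 200, a G2/M peak around 400, and a hypodiploid (sub-G1) tail.
pi_signal = np.concatenate([
    rng.normal(200, 15, 7000),   # G0/G1
    rng.normal(400, 25, 2000),   # S + G2/M (simplified)
    rng.uniform(20, 120, 1000),  # sub-G1 (apoptotic) events
])

sub_g1_gate = 120.0  # hypothetical gate placed below the G0/G1 peak
sub_g1_percent = 100.0 * np.mean(pi_signal < sub_g1_gate)
print(f"Sub-G0/G1 (apoptotic) events: {sub_g1_percent:.1f}%")

# Relative viability from an MTT assay: treated absorbance normalised to untreated control.
a570_control = np.array([0.82, 0.85, 0.80])   # hypothetical optical densities
a570_treated = np.array([0.31, 0.28, 0.33])
viability = 100.0 * a570_treated.mean() / a570_control.mean()
print(f"Relative viability: {viability:.0f}% of control")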
Indirect immunofluorescence Cells were cultured on coverslips in six-well plates to 60-70% confluence and transfected with 200 nM bag3siRNA or NT-siRNA. After 120 hours of transfection, coverslips were washed in PBS 1×, fixed in 3.7% formaldehyde in PBS 1× for 30 min at room temperature, and then incubated for 5 min with PBS 1× containing 0.1 M glycine. After washing, coverslips were permeabilized with 0.1% Triton X-100 for 5 min, washed again and incubated with blocking solution (5% normal goat serum in PBS 1×) for 1 h at room temperature. Coverslips were then incubated at room temperature for 1 hour with 3 mg/ml of the anti-BAG3 mouse monoclonal antibody AC-2, together with a 1:800 dilution of anti-pCREB, a 1:400 dilution of anti-pSTAT3 S727 and a 1:100 dilution of anti-beta-actin antibody, and subsequently washed three times with PBS 1×. After incubation with a 1:500 dilution of goat anti-mouse or anti-rabbit IgG DyLight 488-conjugated antibodies (Jackson ImmunoResearch, West Grove, PA, USA) and a 1:500 dilution of goat anti-mouse IgG DyLight 649-conjugated antibodies (Jackson ImmunoResearch) at room temperature for 45 min, coverslips were again washed three times in PBS and then in distilled water. The coverslips were then mounted on slides with spacers, in mounting medium containing 47% (v/v) glycerol. Samples were analysed using a confocal laser scanning microscope (Leica SP5, Leica Microsystems, Wetzlar, Germany). Images were acquired in sequential scan mode using the same acquisition parameters (laser intensities, photomultiplier gains, pinhole aperture, 63× objective, zoom 1.1) when comparing experimental and control material. For the production of figures, the brightness and contrast of images were adjusted, taking care to leave a light cellular fluorescence background for visual appreciation of the lowest fluorescence intensity features and to help comparison among the different experimental groups. Final figures were assembled using Adobe Photoshop 7 and Adobe Illustrator 10 (Adobe Systems Incorporated, San Jose, CA, USA). Leica Confocal Software (Leica Microsystems, Wetzlar, Germany) and ImageJ were used for data analysis.
2018-01-24T17:25:33.048Z
2017-06-30T00:00:00.000
{ "year": 2017, "sha1": "52c73aa569d6c99b7a2bc84dd3cc8c3f29fa4b96", "oa_license": "CCBY", "oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=18902&path[]=60648", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "52c73aa569d6c99b7a2bc84dd3cc8c3f29fa4b96", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
255439490
pes2o/s2orc
v3-fos-license
Pyroptosis Provides New Strategies for the Treatment of Cancer Cancer is an important cause of death worldwide. The main types of cancer treatment are still surgery, chemotherapy and radiotherapy, and immunotherapy is becoming an important cancer treatment. Pyroptosis is a type of programmed cell death that is accompanied by an inflammatory response. This paper reviews the recent research progress on pyroptosis in tumors. Pyroptosis has been observed since 1986 and has only recently been recognized as programmed cell death mediated by GSDM family proteins. The molecular pathway of pyroptosis depends on the inflammasome-mediated caspase-1/GSDMD pathway, which is the canonical pathway, and the caspase-4/5/11/GSDMD pathway, which is the noncanonical pathway. Other pathways include caspase-3/GSDME. Pyroptosis is a double-edged sword that is closely related to the tumor immune microenvironment. On the one hand, pyroptosis produces a chronic inflammatory environment, promotes the transition of normal cells to tumor cells, helps tumor cells achieve immune escape, and promotes tumor growth and metastasis. On the other hand, some tumor cell treatments can induce pyroptosis, which is a nonapoptotic form of cell death. Additionally, pyroptosis releases inflammatory molecules that promote lymphocyte recruitment and enhance the immune system's ability to kill tumor cells. With the advent of immunotherapy, pyroptosis has been shown to enhance the antitumor efficacy of immune checkpoint inhibitors. Some antineoplastic agents, such as chemotherapeutic agents, can also exert antineoplastic effects through the pyroptosis pathway. Pyroptosis, as a mode of programmed cell death, has been a focus of research in recent years, and the relationship between pyroptosis, tumors and tumor immunity has attracted attention, but there are still questions to be answered regarding the specific mechanisms. Further study of pyroptosis would aid in developing new antitumor therapies and has great clinical prospects. Introduction Cancer is an important cause of death worldwide and has gradually become a major global public health problem [1]. Currently, surgery, radiotherapy and chemotherapy are the main forms of cancer treatment. The rise of immunotherapy in recent years offers new options for cancer treatment [2]. Many of the ideas behind cancer treatment lie in inducing tumor cell death. Cell death is a complex and important regulatory network in which the immune system is also involved. Cell death can be divided into programmed death and nonprogrammed death. Programmed death includes apoptosis, necrotizing apoptosis, autophagy, and ferroptosis [3]. Pyroptosis is a type of lytic inflammatory programmed cell death characterized by swelling and dissolution of cells and accompanied by the release of various proinflammatory factors. As a form of programmed death, pyroptosis occurs more quickly and with a stronger inflammatory response than other forms of death. Recent studies have shown that pyroptosis is closely related to both tumors and immunity [4]. The pyroptosis pathway is involved in both the innate and adaptive immune systems. The core of the pyroptosis pathway is the gasdermin (GSDM) protein family, which can be cleaved by caspases and granzymes to form active fragments. When activated, these proteins can cause membrane perforation, cell swelling and rupture, accompanied by the release of a large number of inflammatory factors, which then affect downstream pathways.
Researchers have shown that pyroptosis may play dual roles in tumor activity. On the one hand, when normal cells are stimulated and pyroptosis occurs, inflammatory factors are released, leading to the formation of an inflammatory microenvironment, which can promote the transformation of normal cells into cancer cells. On the other hand, appropriate levels of pyroptosis help to maintain the stability of the extracellular environment, improve immune activity, and remove damage and pathogens to protect the host. Inducing pyroptosis in cancer cells may become a new therapeutic strategy to inhibit the development of cancer [5]. It is well known that the transformation, growth, invasion and metastasis of human cancer cells, as well as the response to treatment, are regulated by molecular signals. Cell death plays an indispensable role in the biological process of maintaining the normal homeostasis of the body and the rapid proliferation of tumor cells [6,7]. Cell death includes regulatory cell death (RCD) and accidental cell death (ACD). Common RCDs include apoptosis, necrotic apoptosis, ferroptosis, autophagy and pyroptosis. Programmed death is an internal death mechanism controlled by these molecular signals. Ferroptosis is an iron ion dependent RCD, mainly caused by lipid peroxidation. Necrotic apoptosis is mainly mediated by cytokines (TNF-a, IFN-a and IFN-g), Toll like receptors (TLR3, TLR4 and TLR9) and nucleic acid (DNA and RNA) receptors. MLKL is the key molecule of necrotizing apoptosis [8,9]. For decades, apoptosis has been the main mode of cell death studied by people, but the failure to induce apoptosis is the main reason for failure of cancer treatment [10]. Therefore, pyroptosis, as a nonapoptotic mechanism, may make up for the shortcomings of apoptosis in cancer treatment. At present, pyroptosis has become a hot research topic in the field of cancer. Timeline of the pyroptosis study In 1986, Friedlander and colleagues found that anthrax lethal toxin induced rapid lysis and death in mouse macrophages, as well as the release of intracellular substances [11]. In 1992, Zychlinsky and colleagues discovered the morphological characteristics of pyroptosis and its difference from apoptosis through shigella-induced macrophage infection, and this process was considered as programmed cell death mediated by Caspase-1, at that time the concept of pyroptosis did not exist [12]. In 1999, D. Hersh and colleagues found that caspase-1 knockout blocked Shigella-induced cell death [13]. In 2001, Cookson BT et al. defined programmed cell death with an inflammatory response as pyroptosis [14]. In 2005, pyroptosis was redefined by Fink SL et al as programmed cell death mediated by caspase-1 in which cells undergo nuclear contraction, DNA rupture, swelling and the release inflammatory factors [15]. In 2015, gasdermin D (GSDMD) was shown to be a key protein in the pyroptosis pathway, which can be cleaved by caspase-1/4/5/11 to exert an effect [16,17]. In 2016, Liu S reported that the GSDMD protein is usually in a state of self-inhibition. After being cleaved by the aforementioned caspases in a specific position, GSDMD-N domain could lead to cell membrane perforation and thus induce pyroptosis [18,19]. In 2017, Shi J et al redefined pyroptosis as programmed cell death mediated by GSDM family proteins [20]. The GSDM protein family includes GSDMA, GSDMB, GSDMC, and GSDME/DFNA5, which also have membrane perforation activity. 
The Nomenclature Committee on Cell Death (NCCD) defined pyroptosis as regulatory cell death (RCD) that relies on perforation of the plasma membrane and is caused by members of the GSDM protein family, often (but not always) as a result of inflammatory caspase activation in 2018 [21]. Signaling pathways of pyroptosis In the past few years, many studies on the pyroptosis pathway have been conducted. Pyroptosis is generally considered to be inflammatory caspase-induced cell death. The GSDM protein family plays a decisive role in pyroptosis. These proteins consist of two different N-terminal and C-terminal domains linked by a flexible junction region. In the absence of activated cleavage, the binding of the C-terminal to the N-terminal can inhibit the activity of the N-terminal [19]. When the GSDM protein is cleaved, the released GSDM-N terminal forms oligomers, which can lead to pyroptosis through plasma membrane perforation [22]. GSDMD-related pathways are the most researched and are classified into canonical and noncanonical pathways. The caspase-1-dependent pyroptosis pathway is the canonical pathway, while the bacterial toxin lipopolysaccharide (LPS) activates the human caspase-4/5 or mouse caspase-11 pathway, which is the noncanonical pathway [23]. All of these pathways can activate certain caspases to cleave GSDMD, thus releasing the N-terminal domain of GSDMD and causing pyroptosis [24]. Recent studies have shown that caspases that perform apoptosis as well as granzymes can also induce pyroptosis by cleaving GSDM proteins [25]. The canonical pathway The signal transduction of pyroptotic inflammasome activation relies on pattern recognition receptors (PRRs) that recognize pathogen-associated molecular patterns (PAMPs) and nonpathogenrelated damage-associated molecular patterns (DAMPs). Toll-like receptor (TLR), intracellular nucleotide binding oligomeric domain (NOD)-like receptor (NLR) and AIM2-like receptor (ALR) are all pyroptosis-related PRRs. PRRs are receptors for danger signals and can be activated by many factors, including viruses, fungi, bacterial toxins, parasites, nucleic acids, crystalline substrates, certain drugs, silica, reactive oxygen species (ROS), and endogenous damage signals [26][27][28][29]. NLRP3, which is the most common PRR, is activated through two steps: activation of K + /Ca 2+ outflow and mitochondrial-and lysosomal-related damage. After these factors recognize relevant PAMPs and DAMPs, caspase-1 is activated [30][31][32][33]. The inflammasome is assembled and recruits ASC adapters, and the NLR or AIM2 signaling domain connects to ASC [34]. This binding in turn recruits caspase-1, leading to caspase-1 activation, which cleaves and activates pro-IL-8 and IL-1, which are released into the extracellular space to trigger an inflammatory response [35]. Some researchers have also shown that NLRC4 can directly bind caspase-1 in the absence of ASC [36]. GSDMD can be specifically cleaved by caspase-1 to play a role in downstream plasma membrane perforation [16]. In general, inflammation-mediated pyroptosis has been classified as the canonical caspase-1-dependent inflammatory pathway [37]. The noncanonical pathway The noncanonical pathway differs from the canonical pathway in that it does not require an inflammasome to activate caspase-1. Gram-negative LPS can directly activate caspase-4/5 in humans or caspase-11 in mice. Activated caspase-4/5/11 can cleave GSDMD to produce the N-terminal domain, which perforates the plasma membrane and induces pyroptosis [38]. 
In addition, LPS-activated caspase-11 opens pannexin-1 (a nonselective large protein channel) [39] and allows K + efflux, which activates the NLRP3 inflammasome and induces caspase-1 activation and the canonical pyroptosis pathway [40,41]. This process promotes the activation and release of IL-1β and IL-18, whose activation is caused by caspase-1 but not caspase-4/5/11. The formation of pores is not confined to the cell membrane; other membranes within the cell, such as mitochondrial membranes, can also form pores [42]. During inflammatory lung injury, the LPS-mediated caspase-11/GSDMD pathway induces mitochondrial pores, resulting in the release of mitochondrial DNA into endothelial cells, which triggers downstream molecular pathways. Other pyroptosis pathways Studies have shown that caspase-3/6/8, which are associated with apoptosis, can also induce pyroptosis. Caspase-3 can induce GSDME-related pyroptosis under certain conditions, such as in the presence of TNF-α, high expression of GSDME or certain chemotherapeutic agents [43]. Zheng et al. found that GSDME is a switch between apoptosis and pyroptosis induced by chemotherapy drugs. Pyroptosis occurs when GSDME is highly expressed, and apoptosis occurs when GSDME is expressed at low levels in the presence of chemotherapy drugs [44]. In addition, other scholars have shown that GSDME can be cleaved by granzyme B (GZMB) to induce pyroptosis [43]. The Yersinia YopJ protein can cause the cleavage of GSDMD through caspase-8, thus inducing pyroptosis [45][46][47][48]. Additional studies have shown that caspase-6 activates the NLRP3 inflammasome by enhancing the interaction between serine/threonine protein kinase 3 and Z-DNA binding protein 1, which in turn activates caspase-1-mediated pyroptosis [44]. In addition to GSDMD and GSDME, GSDMA/B/C also play an important role in membrane perforation and pyroptosis [19,49,50]. It has been found that granzyme A in cytotoxic lymphocytes can cleave GSDMB and cause pyroptosis. GSDMB-NT lacks a specific connection area and cannot be cleaved by caspase-1/4/5/11 but can be recognized and cleaved by caspase-3/6/7 [51]. Therefore, the role of GSDMB in pyroptosis is still controversial. GSDMA-NT, GSDMD-NT and GSDME-NT showed similar pore-forming activities [19]. However, the exact mechanism has not yet been reported. GSDMC has been shown to play a role in pyroptosis. GSDMC was first shown to be highly expressed in metastatic melanoma and is also known as melanoma-derived leucine zipper-containing extranuclear factor (MLZE) [50]. GSDMC can be cleaved by caspase-8. The presence of programmed death ligand 1 (PD-L1), macrophage-derived TNF-α, antibiotics, or chemotherapy can induce pyroptosis by the caspase-8/GSDMC pathway, and TNF-α can induce pyroptosis through GSDMC in MDA-MB-231 breast cancer cells [48].
Pyroptosis-related findings in different tumor types:
Breast cancer: the expression of caspase-1/GSDMD is negatively correlated with tumor malignancy and risk of death [57]
Breast cancer: DHA inhibits cancer via the caspase-1/GSDMD pathway [58]
Breast cancer: TMAO combined with a PD-1 inhibitor increases antitumor function by inducing the GSDME-mediated pathway [97]
Esophageal cancer: LPS induces the noncanonical pathway of pyroptosis [62]
Gastric cancer: reduction of GSDME promotes cancer progression [17]
Gastric cancer: chemotherapy induces pyroptosis via the caspase-3/GSDME pathway [45,63]
Lung cancer: cisplatin and paclitaxel induce pyroptosis via the caspase-3/GSDME pathway [85]
Cervical cancer: lobaplatin induces pyroptosis via the caspase-3/GSDME pathway
Pyroptosis observed in various tumors In HCC, researchers have shown that NLRP3 and ASC expression was significantly downregulated, which was negatively correlated with the pathological grade and clinical stage of HCC [52]. The expression of caspase-1 was significantly decreased in HCC tissues, and caspase-1, IL-1β and IL-18 in HCC tissues were lower than those in paracancerous tissues. Euxanthone inhibits the development of HCC by inducing pyroptosis [53,54]. In HCC tissues, the expression of DFNA5/GSDME was lower than that in normal tissues; when the expression of DFNA5/GSDME was upregulated, cell proliferation was inhibited [55]. Pancreatic ductal adenocarcinoma (PDAC) is a highly malignant tumor, and the therapeutic effect is still not ideal. The expression of STE20-like kinase 1 (MST1) in PDAC is decreased. Restoring MST1 expression can lead to increased PDAC cell death, and caspase-1-mediated pyroptosis can inhibit proliferation, invasion and metastasis through ROS induction [56]. In breast cancer, Wu et al. showed that the expression levels of caspase-1, IL-1 and GSDMD were negatively correlated with tumor grade, size, stage, and risk of death [57]. Docosahexaenoic acid (DHA) inhibits breast cancer, and when it is added to the breast cancer cell line MDA-MB-231, caspase-1 and GSDMD activities are enhanced, and pyroptosis occurs, which manifests as increased IL-1β secretion and membrane perforation [58]. In the development of esophageal cancer, alcohol consumption has been shown to inhibit caspase-1 and promote IL-18 and IL-1β and has been associated with pyroptosis [59]. Alcohol consumption exacerbates the course of esophagitis through pyroptosis. Gastroesophageal reflux disease (GERD) increases the risk of Barrett's esophagus and esophageal cancer due to long-term exposure of the esophageal epithelium to damage to the esophageal mucosa and chronic inflammation stimulated by alcohol consumption [60,61]. In addition, LPS can also play a role in the occurrence of esophageal cancer by inducing pyroptosis via the noncanonical pathway [62]. In gastric cancer, studies have shown that a reduction in GSDME expression promotes the progression of gastric cancer [17]. During chemotherapy, GSDME can be activated by caspase-3 to induce pyroptosis in gastric cancer cells [45,63]. In addition, pyroptosis plays a role in the occurrence of tumors and the inhibitory effect of drugs on tumors in many other cancers, such as lung cancer [64], glioma [16,35], and ovarian cancer [65]. Pyroptosis and tumor immunity interact Pyroptosis occurs in normal cells, which may change the microenvironment. The inflammatory microenvironment can help accelerate the immune escape of tumors and have a protumor effect [66]. Long-term tissue or cell exposure to an inflammatory environment can increase the risk of cancer.
We have found that cancer cells can adapt to the immune microenvironment, undergo immune escape from immune attack, and adjust the progression of the primary tumor and metastasis. Pyroptosis provides a chronic inflammatory environment for tumorigenesis to promote tumors through inflammasomes, support-ing the tumor microenvironment and the production of inflammatory cytokines [67]. Inflammasomes regulate the tumor immune microenvironment and cell death, and the intestinal microbiota plays an important role in tumorigenesis and metastasis [68]. Inflammasomes can be clinically used as prognostic markers for cancer patients [33,68]. In addition, some treatments can stimulate the immune system and induce pyroptosis in tumor cells [69]. The role of pyroptosis in tumorigenesis and metastasis is related to many factors, including the activation of protooncogenes, the inactivation of tumor suppressor genes, changes in the immune microenvironment, oxidative stress and chronic inflammation. Activation of the pyroptosis pathway results in the release of inflammatory mediators such as IL-1 and IL-18 into the microenvironment, which can promote cancer transformation in tissue. Researchers showed that mice lacking active inflammasomes were more likely to successfully develop colitis-associated colon cancer than wild-type mice [70]. These results suggest that pyroptosis may play different roles in promoting and inhibiting the growth of different tumor cells. The specific mechanism of pyroptosis and its relationship with tumors still need further study. The occurrence and development of tumors cannot be separated from the immune escape of tumor cells, and the reactivation and maintenance of the immune response to tumors can play a role in the control and elimination of tumors. Cancer cells have immunogenicity, and targeting the immunogenicity of cancer cells is a common tumor immunotherapy [71,72]. Because inflammation, especially persistent chronic inflammation, plays an important role in the development, progression, angiogenesis and metastasis of cancer [73], many tumors are secondary to chronic inflammation, which is mediated by M1 macrophages, natural killer cells, and CD8+ T cells [74]. Tumor cells can also recruit specific subsets of immune cells to participate in tumor suppression, including myeloid suppressor cells (MDSCs), M2 macrophages, and regulatory T cells [75,76]. However, in tumors expressing GSDME, pyroptosis produces DAMPs that can recruit immune cells to the tumor microenvironment, and GSDME expression greatly increases the number of tumor-infiltrating lymphocytes (TILs) and macrophage phagocytosis [77]. In the absence of GSDME, caspase-3 activation can lead to apoptosis, which is characterized by cell contraction and plasma membrane blistering. Apoptotic cells are cleared by neighboring phagocytes before they lose integrity and are necrotic cells with low immunogenicity. The activation of caspase-3 when GSDME is highly expressed can induce pyroptosis. Cell swelling and rapid rupture of the plasma membrane indirectly increase the recruitment of immune cells and their role in tumor suppression by promoting an inflammatory and potentially immunogenic tumor environment. Due to pyroptosis, tumor immunity is promoted and tumor growth is inhibited. 
Although GSDME is expressed in only a small number of tumor cells, a small number of tumor cells undergoing pyroptosis are sufficient to modulate the tumor immune microenvironment and activate a powerful T-cell-mediated antitumor immune response [78]. In immunodeficient mice, tumor suppression by GSDME disappeared due to a lack of NK cells and CD8+ killer T cells, suggesting that this inhibition was dependent on these two immune cell types [43]. Therefore, research and development of antitumor drugs can be guided by the idea of tumor immunity and the tumor microenvironment, and the expression of GSDM family proteins can become a potential marker of tumor immunotherapy. Pyroptosis is a double-edged sword that plays an important role in tumorigenesis and antitumor immunity at all stages of tumor development. Whether it promotes or inhibits tumors depends on the tumor type, host inflammatory state, immunity, and related effector molecules. When tumor cells undergo pyroptosis, the inflammatory factors IL-1β and IL-18 are released, and these inflammatory factors can promote and fight tumors [76]. Studies have shown that the expression of IL-1β in tumors is higher than that in normal tissues, and this factor can promote the growth, invasion and metastasis of tumor cells, which is negatively correlated with prognosis [79]. The expression of IL-18 in tumors is also increased and negatively correlated with prognosis. High expression of IL-18 also promotes the biological behavior of tumors [80][81][82]. However, IL-18 has dual effects. This factor also regulates the immune response and inhibits tumor progression through the recruitment of NK cells, T cells and monocytes [82,83]. The tumor-promoting and tumor-suppressive effects of chronic inflammation induced by pyroptosis are similar mechanistically, so the timing, level and composition of pyroptosis induction need to be closely controlled. Clinical anti-tumor strategy and pyroptosis At present, the clinical anti-tumor treatment is still mainly surgery, radiotherapy and chemotherapy, as well as the recently emerging immunocheckpoint inhibitors, targeted therapy, etc. Recent studies have also found that some of the anti-tumor treatment methods used in clinical practice also play an anti-tumor role by inducing pyroptosis and changing the tumor microenvironment to promote tumor immunity. The following content mainly systematically combs the interaction between clinical anti-tumor strategies and pyroptosis. Surgery has always been an important means of solid tumor treatment. At present, it is still the main treatment for solid tumors. Although the effect of surgery on tumor cell pyroptosis is rarely mentioned by scholars, recent studies have found the relationship between surgery and tumor immunity. Immunosuppression after surgery is reported by scholars [84]. After tissue damage, DAMP is released, causing cell pyroptosis and inflammatory environment, and then recruiting immunosuppressive cells, such as MDSC, M2 macrophages, etc. Therefore, the impact of surgery on human body may be unfavorable from the perspective of anti-tumor immunity. Eliminating the activation of the pyroptosis pathway caused by surgery can reduce the immunosuppression caused thereby. A variety of chemotherapeutic drugs have been proved to be able to induce tumor cell pyroptosis, including cisplatin, paclitaxel, 5-FU, lobaplatin, etc. Chemotherapy induced pyroptosis is often caused by activation of GSDME pathway. 
In the lung cancer cell line A549, researchers showed that both cisplatin and paclitaxel could induce pyroptosis in tumor cells through the caspase-3/GSDME pathway, and cisplatin acts stronger than paclitaxel [85]. High levels of GSDME can transform apoptosis into pyroptosis. Some studies have shown that the chemotherapy drug lobaplatin can induce pyroptosis in cervical cancer cells through GSDME [86]. This effect is achieved through activation of caspase-3/9 by the ROS/JNK/BAX mitochondrial apoptosis pathway. Similarly, lobaplatin can also induce pyroptosis in colorectal cancer cells through this pathway [87]. In gastric cancer, 5-FU induces pyroptosis in gastric cancer cells through GSDME rather than GSDMD [63]. In GSDME +/+ mice, cisplatin or 5-FU can cause severe intestinal injury and immune cell infiltration, while GSDME -/mice have fewer signs of injury. Moreover, GSDME knockout can also reduce lung injury in response to cisplatin or bleomycin in mice [88]. These effects suggest that GSDME-induced pyroptosis is associated with chemotherapy side effects. These results also suggest that inducing pyroptosis in tumor cells is a possible alternative strategy for tumor therapy. Many drugs in addition to chemotherapy drugs exert antitumor effects through GSDME-mediated tumor cell pyroptosis [45]. However, researchers showed that only approximately 1 in 10 human tumor cells have high levels of GSDME compared with 3 in 5 primary cells. After chemotherapy, tumor cells with high levels of GSDME can undergo pyroptosis, and GSDME-mediated pyroptosis is associated with toxicity and side effects caused by chemotherapy drugs [45]. Some studies have shown that GSDME is epigenetically silenced in several cancers, such as gastric cancer, colorectal cancer and breast cancer, and is considered a tumor suppressor gene. This gene may be epigenetically inactivated through methylation, and its promoter is hypermethylated in several cancers, demonstrating that the main form of gene silencing is hypermethylation [89,90]. GSDME methylation is considered a promising biomarker for cancer detection. In addition, endogenous GSDME expression is closely related to the response to chemotherapy or immunotherapy. It has been reported that low GSDME expression can impair antitumor efficacy, while increasing GSDME expression levels can improve the efficacy of antitumor therapy [45,85,91]. Many cancer patients receive radiotherapy, which destroys tumor cells through high-energy radiation. Cao, W found that ionizing radiation can trigger tumor immunity by inducing GSDME mediated pyroptosis in tumors [92]. It was found that the fragmentation of GSDME occurred in a dose-dependent and event-dependent manner, and all kinds of irradiation could induce cell death. In addition, cytotoxic T cells and cytokine release appear after pyroptosis. Radiation causes the death of immunogenic cells and promotes anti-tumor immunity. Radiotherapy can directly destroy DNA and kill cancer cells. After recognizing DNA fragments through AIM2 receptor, the inflammasome will be activated and induce pyroptosis, and the release of inflammatory mediators will increase tumor infiltrating immune cells, thus making the "cold" tumor "hot" in immunology [93]. During the process of tumor development, immune escape occurs. Restoring the visibility of tumor antigens to the immune system is essential to inhibiting immune escape and increasing tumor immune activation. Immune checkpoint inhibitors promote a targeted immune response to neoplastic antigens. 
Pyroptosis may play an important role in this treatment. Programmed death-1 (PD-1) and programmed death ligand-1 (PDL-1) are immune checkpoint regulators that are targets of widely used immune checkpoint inhibitors [94]. Gradually, researchers found a relationship between pyroptosis and these factors. Currently, the PD-1 and PDL-1 pathways are found to be important in cancer immunosuppression [95]. Mien-chie Hung et al first reported that PD-L1 interacts with P-Y705-STAT3 and then induces the nuclear translocation of PD-L1 under hypoxia. The function of nuclear PD-L1 at the transcriptional level contributes to the expression of GSDMC. Then the apoptosis induced by TNFα was transformed into pyroptosis [48]. Clinical trials have shown that PD-L1 inhibitors combined with chemotherapy or radiation can kill tumor cells via pyroptosis, and there is improved survival compared to patients treated with PDL-1 inhibitors alone [96]. Combination therapy increases the sensitivity of breast cancer cells to PD-1/PDL-1 inhibitors due to inflammation caused by pyroptosis in the tumor immune environment. In the presence of GSDME, caspase-3 activation leads to pyroptosis, and the formation of an inflammatory environment can increase the recruitment of immune cells [43]. Trimethylamine N-oxide (TMAO) can induce GSDME-mediated pyroptosis in breast cancer cells, and TMAO in combination with PD-1 can increase the antitumor activity of PD-1 alone. Researchers found that TMAO is more abundant in tumors with an activated immune microenvironment and that TMAO can enhance CD8+ T-cell-mediated antitumor immunity by inducing pyroptosis of tumor cells through activation of ER stress kinase PERK [97]. If PD-1 or PDL-1 inhibitor therapy is the initial tumor treatment, inflammation caused by pyroptosis further potentiates the effects. These studies provide a theoretical basis for the combination of PD-1/PDL1 inhibitors and other antitumor therapies. The inflammatory state of the tumor microenvironment has an impact on the response to immune checkpoint inhibitor therapy, which changes the tumor microenvironment and the role of lymphocytes in the tumor by triggering pyroptosis and transforming tumors more sensitive immunologically. Thermalization of the immune response is a complex process that directly regulates the innate immune response by releasing DAMPs and inflammatory factors, enhancing the recruitment of adaptive immune cells, and increasing antigen presentation and TLR activation, thereby amplifying the immune response. Chimeric antigen receptor T (CAR-T) cells have been used to treat hematological malignancies and have achieved good results. However, cytokine release syndrome (CRS) is a serious side effect of this technology. The release of granase B by CAR-T cells may lead to pyroptosis by activating the caspase-3/ GSDME pathway [42], and GSDME knockout eliminates CRS. In addition, the number of perforin/ granzyme B in CAR-T cells, rather than existing CD8+T cells, will induce GSDME mediated target cell pyroptosis [42]. Recent studies have also shown that the synergetic effect of TNF and INF-γ will form a positive feedback loop between inflammatory cell death and cytokine release, and then drive CRS [98]. These results indicate the multiple roles of pyroptosis in tumor immunotherapy, which has corresponding clinical significance. In addition to immune checkpoint inhibitors and adoptive T-cell therapies, immunogenic cell death has received increasing attention in the field of tumor immunity [99]. 
Pyroptotic cells can also exert their immunogenic effects [100], and IL-1β and IL-18 released by pyroptotic cells, as well as various DAMPs, can recruit immune cells such as dendritic cells or macrophages to engulf pyroptotic cells. Mature dendritic cells present their antigens to tumor-specific toxic T cells to kill tumors [101]. Wang and colleagues found that TBD-3C, a membrane-targeted photosensitizer with aggregation-induced emission (AIE) characteristics, triggered pyroptosis through photodynamic therapy (PDT) to enable cancer immunotherapy. TBD-3C-induced pyroptosis can stimulate M1 polarization of macrophages, lead to dendritic cell (DC) maturation, and activate CD8+ cytotoxic T lymphocytes (CTLs). It can not only inhibit the growth of primary pancreatic cancer but also attack distant tumors [102]. Nanodrugs are a new tumor treatment method combining traditional drugs and nanotechnology. They serve as carriers for the controlled release of chemotherapy drugs, which can directly deliver chemotherapy drugs to targeted cancer cells, reduce their accumulation in normal cells and tissues, and reduce the side effects of chemotherapy. Nanodrugs can play an anti-tumor role by inducing pyroptosis [103,104]. Researchers have shown that treating A549 cells with zinc oxide nanoparticles (Zn-ONPs) induced IL-1β release and caspase-1 activation and increased LDH release, indicating pyroptosis in A549 cells [105]. LipoDDP is a tumor-targeting nanoliposome carrying cisplatin. DAC is a DNA methyltransferase (DNMT) inhibitor, which can inhibit the methylation of GSDME in tumor cells. LipoDDP combined with DAC can utilize liposomes to activate caspase-3-mediated cell death and induce an immune response, thereby inhibiting the proliferation and metastasis of tumor cells [106]. The combination of photodynamic therapy and nanomedicine as biomimetic nanoparticles can induce both pyroptosis and systemic anti-tumor immunity [104,107,108]. Small-molecule targeted drugs are among the rapidly developing cancer treatments of recent years. Some targeted drugs have been found to induce tumor cell pyroptosis. Val-boroPro induces pyroptosis in primary acute myeloid leukemia (AML) cells by activating the inflammasome sensor protein CARD8, which in turn activates procaspase-1 [109]. A study in melanoma confirmed that the combination of BRAFi and MEKi could have an antitumor effect through GSDME-induced pyroptosis [77]. DDP combined with BI2536 (a PLK1 kinase inhibitor) can cause esophageal cancer cell pyroptosis [110]. Among the reported cases of targeted-drug-induced tumor cell pyroptosis, the caspase-3/GSDME pathway is the most common. The anti-tumor drugs and therapies mentioned above, including new treatment methods such as nanodrugs and photodynamic therapy, can induce pyroptosis to exert an anti-tumor effect. Other natural compounds and some conventional drugs can also promote pyroptosis of tumor cells. Dobrin et al. stimulated triple-negative breast cancer cells with ivermectin, and the pannexin-1 pathway was activated, inducing P2X4/P2X7 receptor overexpression, ATP release, and ultimately pyroptosis [111]. Many other drugs, such as metformin, anthocyanin, and DHA, can induce GSDMD-mediated pyroptosis in various cancers [58,112,113].
Pyroptosis occurs widely in tumor cells, and a variety of antitumor methods, including surgery, radiotherapy, chemotherapy, the immune checkpoint inhibitors developed in recent years, small-molecule targeted drugs, photodynamic therapy and nanodrugs, as well as some natural compounds such as traditional Chinese medicine preparations, can induce pyroptosis in tumor cells and exert antitumor activities. Conclusion The research on pyroptosis has made great progress in the past few years. In this review, we focus on the molecular mechanism of pyroptosis. Pyroptosis is a new type of programmed cell death that is accompanied by inflammation and has been shown to be associated with many diseases, including tumors and atherosclerosis. The relationship between pyroptosis and tumors, pyroptosis and immunity, and its clinical application has become a hot topic. It is of great significance to reveal the role of pyroptosis in many diseases, especially tumors. In particular, in the field of the relationship between pyroptosis and tumors, especially tumor immunity, there are still many questions to be answered about the specific mechanism. Pyroptosis plays different roles in different types of tumors and can play both protumoral and antitumoral roles. Multiple mechanisms are known to regulate pyroptosis. From the various stimuli, to the formation of inflammasome complexes, to pyroptosis signal transduction, there are extremely complex regulatory mechanisms. These regulatory mechanisms not only affect the occurrence of pyroptosis but also affect the immune response in the extracellular environment. Further study of the relationship between tumors and pyroptosis would be helpful in developing new antitumor therapies and prompting new ideas for the fight against cancer, and has great clinical prospects.
2023-01-06T05:06:05.912Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "46c47794e341166d0d76c6e2b0426ca186f6325e", "oa_license": "CCBY", "oa_url": "https://www.jcancer.org/v14p0140.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "46c47794e341166d0d76c6e2b0426ca186f6325e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7352629
pes2o/s2orc
v3-fos-license
Imaging of Nitric Oxide in Nitrergic Neuromuscular Neurotransmission in the Gut Background Numerous functional studies have shown that nitrergic neurotransmission plays a central role in peristalsis and sphincter relaxation throughout the gut and impaired nitrergic neurotransmission has been implicated in clinical disorders of all parts of the gut. However, the role of nitric oxide (NO) as a neurotransmitter continues to be controversial because: 1) the cellular site of production during neurotransmission is not well established; 2) NO may interacts with other inhibitory neurotransmitter candidates, making it difficult to understand its precise role. Methodology/Principal Findings Imaging NO can help resolve many of the controversies regarding the role of NO in nitrergic neurotransmission. Imaging of NO and its cellular site of production is now possible. NO forms quantifiable fluorescent compound with diaminofluorescein (DAF) and allows imaging of NO with good specificity and sensitivity in living cells. In this report we describe visualization and regulation of NO and calcium (Ca2+) in the myenteric nerve varicosities during neurotransmission using multiphoton microscopy. Our results in mice gastric muscle strips provide visual proof that NO is produced de novo in the nitrergic nerve varicosities upon nonadrenergic noncholinergic (NANC) nerve stimulation. These studies show that NO is a neurotransmitter rather than a mediator. Changes in NO production in response to various pharmacological treatments correlated well with changes in slow inhibitory junction potential of smooth muscles. Conclusions/Significance Dual imaging and electrophysiologic studies provide visual proof that during nitrergic neurotransmission NO is produced in the nerve terminals. Such studies may help define whether NO production or its signaling pathway is responsible for impaired nitrergic neurotransmission in pathological states. Introduction Nitric oxide (NO) has been proposed as a neuromuscular neurotransmitter of nonadrenergic noncholinergic (NANC) inhibitory nerves in the parasympathetic [1] and the enteric nervous systems [2]. Clinical importance of this signaling pathway is evidenced by the fact that animal models of impaired nitrergic neurotransmission reveal phenotypes resembling major human gastrointestinal motility disorders [3][4][5][6][7]. However, because of the unusual characteristics of NO, its status as a neurotransmitter or its regulation remains unsettled. While there is strong physiological evidence that NO is involved in inhibitory neurotransmission [8][9], its role as a true neurotransmitter has been questioned. It has been argued that NO may a mediator of another neurotransmitter such as VIP [10][11]. This later view is supported by biochemical studies in the isolated postjunctional smooth muscle cells showing that VIP generates NO in the smooth muscles [12]. However, in a critical review of the available evidence, Van Geldre and Lefebvre [11] concluded that VIP-generated NO in the isolated smooth muscles may be nonphysiological. Based on electrophysiological studies, it has been proposed that during nitrergic neurotransmission, NO is generated in the nerves and is a true inhibitory neurotransmitter [13][14]. 
The purpose of the present studies was to examine the effect of electrical field stimulation of mice gastric muscle strips: 1) on NO and Ca 2+ signals at the cellular level by fluorescent imaging; 2) on the effect of pharmacological treatments on these signals; 3) on nitrergic slow inhibitory junction potential (sIJP) in electrophysiological studies; and 4) to compare the effects of the pharmacological treatments on NO in the imaging studies and sIJP in the electrophysiological studies. These results provide, for the first time, visual identification of nerve varicosities in situ in the gut and also provide proof that on NANC nerve stimulation, NO is produced in the myenteric nitrergic nerve varicosities and not in the smooth muscle cells, thereby demonstrating that NO is a neurotransmitter rather than a mediator produced in the smooth muscles. They also document that during neurotransmission NO is produced de novo and not stored as a NO donor in the varicosities and released with other classical neurotransmitters. Changes in NO production in response to various pharmacological treatments correlated well with changes in slow inhibitory junction potential of smooth muscles. Functional studies combined with imaging may help elucidate whether NO production or its upstream or downstream signaling is the underlying mechanism of impaired nitrergic neurotransmission in pathological states. Visualization of myenteric nerve varicosities by their Ca 2+ signals We first sought to visualize varicosities of myenteric neurons in the mouse gastric smooth muscle. Imaging was focused on the varicosities because they are the sites of release of the neurotransmitters. Multiphoton imaging of circular muscle strips preloaded with the calcium indicator after EFS revealed discrete orange-red fluorescent spots. These images were superimposed on the image of smooth muscles obtained in the transmission mode. Note that varicosities were linearly oriented along the longitudinal axis of the underlying muscle fibers (Figure 1). These fluorescent spots were not seen in tissues pretreated with tetrodotoxin, suggesting that they represented nerve varicosities. Figure 1a shows a low power view of the varicosities visible against the background of the non-fluorescent smooth muscle cells at a depth of 150 µm from the surface of the strip. Varicosities appeared as pearl-like structures that are linearly arranged along the axis of the underlying smooth muscle fibers. Figure 1b shows a magnified view of an axon with the varicosities. The varicosities varied somewhat in their size and were on average 2-4 µm × 2-3 µm and were separated from each other, with the inter-varicosity interval varying from 2 µm to greater than 200 µm. Figure 1c shows intensity (height) and width of localized fluorescent calcium signals. These columns represent fluorescent signals from the varicosities. This view also shows that the varicosities are linearly arranged on an axon and are separated by inter-varicosity intervals. Ca 2+ signals identify all nitrergic and non-nitrergic varicosities. These studies show that multiphoton microscopy can vividly visualize varicosities on axons deep below the surface in intact tissue. Elevated Ca 2+ signals were not seen in smooth muscles because EFS was applied under nonadrenergic noncholinergic conditions to block muscle excitation. Visualization of NO in the varicosities To visualize nitrergic varicosities, we examined muscle strips preloaded with DAF-2 after applying EFS under NANC conditions.
Green DAF-2T fluorescence represents NO signals (Figure 2). Panel (2a) shows fluorescent green NO signals in nitrergic varicosities superimposed on the underlying smooth muscle layer imaged in the regular transmission mode. Note the absence of NO signals in the smooth muscle cells. The neurally released NO may diffuse into the postjunctional smooth muscles or ICCs to exert its effects on these structures. However, no NO signals were seen in the smooth muscles or ICCs, suggesting that the level of NO in the target tissue was below the threshold of detection and may have been consumed by its action on the target enzymes. We also examined NO signals in the strips preloaded with DAF-2DA but not electrically stimulated, tissues that received electrical stimulation, and tissues that were pretreated with L-NAME prior to EFS. Panel (2b) shows intensity (height and width) of the NO fluorescent signals from the varicosities. Note that very few NO signals were seen in the strips without EFS. The signals increased in the strips that received EFS and were again absent in the strips that received EFS after L-NAME treatment. Panel (2c) shows relative quantification of the NO signals. The bar graphs revealed that the NO signal was 1.5±0.25 in the basal state (unstimulated strips), 4.0±0.97 after EFS, and 1.0±0.03 in strips pretreated with the NOS inhibitor, L-NAME (mean±SEM of normalized fluorescence intensity in arbitrary units, n = 6). Basal levels of NO may be generated by the tonic activity of the nitrergic neurons. These observations strongly suggest that the green signals are truly due to NO produced in the nerve varicosities. Colocalization of NO and Ca 2+ in the varicosities In order to identify whether NO signals were produced in prejunctional nitrergic nerve terminals that also showed Ca 2+ signals, we loaded the muscle strips with both DAF-2 and calcium orange and applied EFS. These strips were imaged for NO and Ca 2+ signals (Figure 3). The top panel shows green NO signals and the middle panel shows orange-red Ca 2+ in the varicosities. The bottom panel shows the yellow color of the colocalized Ca 2+ and NO signals. Some varicosities showed only orange-red fluorescence without yellow fluorescence; these may represent non-nitrergic varicosities. Preliminary studies of serial 1 second imaging of calcium and NO signals showed that the Ca 2+ signal appeared within 1 second of EFS and the NO signal followed it. Further dynamic studies using a calcium dye with fast kinetics are needed to fully document the temporal relationship of the Ca 2+ and NO signals. Colocalization of NO and nNOS in the varicosities In order to identify whether NO signals were produced in prejunctional nitrergic nerve terminals, we applied EFS to the tissues that had been loaded with DAF-2. Since the reaction of DAF-2 with NO is irreversible, the fluorescent DAF-2T marker remained in the varicosities for a long time. These strips were then immunostained with anti-nNOS antibody. The muscle strips with DAF-2T marker and anti-nNOS staining were examined for colocalized fluorescence.
NO signals were colocalized to the nerve terminals that showed immunoreactivity to nNOS, indicating that NO production occurred in the nitrergic nerve varicosities (Figure 4). These imaging studies provide visual proof that during nitrergic neurotransmission, nitric oxide is produced de novo in the nitrergic nerve varicosities. NO signals were not seen in the smooth muscle cells. Effect of various antagonists on Ca 2+ and NO signals in the muscle strips We also examined the effect of various known antagonists of nitrergic neurotransmission on Ca 2+ and NO signals in the electrically stimulated strips preloaded with DAF-2 and calcium orange. Table 1 summarizes the relative quantification of NO and Ca 2+ signals after various antagonist treatments. Note that EFS (control) increased NO and Ca 2+ signals. The elevation of NO and Ca 2+ signals was abolished by tetrodotoxin. Since tetrodotoxin blocks the fast sodium channel that mediates the action potential that is conducted along the axon and depolarizes the nerve varicosities to cause Ca 2+ influx [11], these results suggest that the EFS response was due to stimulation of cell bodies or fiber tracts rather than direct stimulation of the varicosities. The effect of EFS was blocked by the selective inhibitor of N-type Ca 2+ channels, ω-CTX GVIA, so that no significant increase in Ca 2+ or NO signals was seen. However, the L-type Ca 2+ channel blocker, nifedipine, did not alter the increases in Ca 2+ or NO signals. These observations indicate that Ca 2+ entry into the varicosities that stimulates NO production occurred via N-type Ca 2+ channels. Pretreatment of tissues with the calmodulin (CaM) inhibitor W7 did not affect the Ca 2+ increase, but markedly suppressed NO production by EFS, suggesting that the increase in internal Ca 2+ stimulates nNOS via a Ca 2+ -CaM mediated process to produce NO. Similarly, pretreatment with the nNOS inhibitor L-NAME suppressed NO signals without affecting the Ca 2+ signals, showing that suppression of nNOS caused suppression of NO generation in the presence of a normal increase in Ca 2+ upon electrical stimulation. Effect of various antagonists on the slow IJPs in mice gastric muscle strips In order to correlate the pharmacology of the imaging studies with functional neurophysiological studies of smooth muscle membrane potentials, we examined the effects of antagonists on the nitrergic slow inhibitory junction potentials (sIJP). EFS of muscle strips under NANC conditions produced two overlapping IJPs called the fast and the slow IJPs. Apamin treatment blocked the fast IJP and revealed the nitrergic slow IJP [13,14,23]. The slow IJP was blocked by TTX, ω-CTX GVIA as well as W7 and L-NAME, but was not affected by apamin or nifedipine. Figure 5 shows a representative slow IJP and summarizes the quantitative data. Bars represent mean values±SEM (6 cells, n = 3 mice). These results show that antagonists of the physiologic nitrergic slow IJP also suppress NO signals in the varicosities and these events can be documented using the imaging studies. Conclusions In conclusion, the unique chemical properties of NO and its indicator dyes and multiphoton microscopy allow imaging of NO during nitrergic neurotransmission that is not possible with many other neurotransmitters. These studies provide visual proof that NO is a true neurotransmitter and not a secondary mediator.
Imaging of nitrergic neurotransmission may help distinguish between disorders due to impaired NO production, such as nNOS deficiency, and those due to impaired NO action, such as seen in deficiencies of NO-sensitive guanylyl cyclase [4], cGMP kinase 1 [5], Collagen XIXa1 [6] or c-kit [24]. Simultaneous NO and Ca 2+ imaging studies combined with neurophysiology may also provide an important tool for understanding mechanisms of impaired nitrergic neurotransmission in motor disorders of the gut. Such studies may also help better define the underlying defect in nitrergic neurotransmission in conditions such as diabetic gastroparesis [7] and other human gastrointestinal diseases like achalasia [6] and abnormal gastrointestinal motility due to undefined cause. Simultaneous imaging of Ca 2+ and NO can also help document whether the suppressed nitrergic neurotransmission is due to abnormalities in calcium kinetics, CaM abnormalities or defects in the enzyme, nNOSα. Such studies may also be helpful in elucidating abnormalities in the urinary tract and cerebral blood vessels where nitrergic neurotransmission is a major regulatory mechanism. Ethics Statement The experimental protocol used was approved by the Animal Care Committee of the VA Boston Healthcare System. Animals and Tissue Preparation CO 2 narcosis was used to euthanize adult male mice (22-38 g). The stomach was removed and 4-6 mm wide strips of the smooth muscle layer were prepared after shearing the mucosa. The strips were transferred to a tissue bath with a Sylgard (Dow-Corning, Midland, MI) floor and pinned to the floor with the mucosal surface facing up. The chamber was continuously perfused with warm oxygenated (95% O 2 /5% CO 2 ) Krebs solution at a rate of 3 ml/min. The bath temperature was maintained at 37±0.5°C and the entire set-up was protected from light. Drugs and Chemicals The drugs and chemicals were obtained from Sigma (St Louis, MO) unless specified otherwise. They were prepared fresh before use. Dye Loading Gastric muscle strips were mounted in a chamber and perfused with Krebs' solution prior to loading with dyes/drugs. Calcium orange-AM and/or DAF-2DA were added 1 hour prior to EFS. The emission spectrum (576 nm) of calcium orange can be well resolved from that of DAF-2T (515 nm), thus facilitating simultaneous imaging of the two components. Electrical Field Stimulation Tissues were incubated in fluorescent dyes for one hour prior to EFS and antagonists were applied 20-30 min prior to the dye loading. The EFS was applied under NANC conditions (in the presence of atropine (1 mM) and guanethidine (5 mM)) to block cholinergic and adrenergic responses to elicit nonadrenergic noncholinergic inhibitory responses. The EFS consisted of 3 stimulus trains of 0.5 sec each (square wave pulses of 1 ms at 20 Hz, 70 volts) applied 30 seconds apart. The tissues were immediately mounted on the slides and imaged promptly. Imaging with multiphoton microscopy Dye-loaded and treated tissues after EFS were imaged with a BioRad MRC 1024ES multi-photon imaging system (BioRad, Hercules, CA). The imaging system was coupled with a mode-locked titanium:sapphire laser (Tsunami, Spectra-Physics, Mountain View, CA) operating at 82 MHz repetition frequency and 80 fs pulse duration with a wavelength of 820 nm. The Ti:sapphire laser tuned to 820 nm in multi-photon excitation mode at 700 mW was able to excite the calcium orange-Ca 2+ (549 nm) and DAF-2T (495 nm) dyes to consistently generate measurable emissions in the orange-red (576 nm) and green (515 nm) regions of the spectrum.
The average laser power delivered to the sample was 70-150 mW. Narrow band pass filters were used to separate the emission spectra of the two dyes. A Zeiss Axiovert S100 inverted microscope equipped with a high-quality water-immersion 40×/1.2 NA C-Apochromat objective was used in the epifluorescence and/or transmission mode to image the nerve varicosities. The 512×512 pixel images were collected in the direct detection configuration at a pixel resolution of 0.484 µm with a Kalman-5 collection filter. The nerve varicosities were identified by Z scanning the circular smooth muscle layer and were generally at depths of 150 µm. The images were reconstructed using the BioRad LaserSharp software. Relative quantification of Ca 2+ and NO signals Impaired nitrergic neurotransmission may be associated with a reduced Ca 2+ response, a normal Ca 2+ response but reduced NO response, or normal Ca 2+ and NO responses to EFS. We determined the relative changes in calcium orange-Ca 2+ and DAF-NO fluorescence, comparing the intensities of the signals in the antagonist-treated, EFS-stimulated tissues with the unstimulated control tissues. Three to six boundaries were drawn around arbitrary areas along the nerve varicosities identified by XYZ scanning in a field of view at 320× magnification using an image processor (LaserSharp, BioRad). Fluorescence intensity was integrated over all pixels within the boundary of each individual enclosed area and quantified using LaserSharp (BioRad) and MetaMorph (Universal Imaging, West Chester, PA). The data are presented as the average of at least three blinded experiments performed on different days. Immunolabeling with anti-nNOS antibody For colocalization of NO and nNOS, muscle strips were loaded with the NO indicator, DAF-2DA, and EFS was applied as described above. The tissues were then fixed in 4% freshly prepared formaldehyde in PBS and were labeled with rabbit anti-nNOS antibody. Slow IJP Recordings Intracellular membrane potential recordings under NANC conditions were made using sharp microelectrodes with high input impedance as described in detail in our previous publications [13,14,23]. Statistics Data were expressed as means±SEM and appropriate tests were done to compare the significance of differences in means (Student's t test and multiple comparisons, respectively).
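The relative-quantification step described above can be illustrated with a short numerical sketch. This is only a minimal illustration under stated assumptions: the image arrays, ROI masks, and group sizes below are synthetic placeholders generated with NumPy and SciPy, not the actual LaserSharp/MetaMorph workflow or the experimental data; it simply shows how integrated ROI intensities can be normalized to the unstimulated control mean and compared with a Student's t test.

```python
# Hedged sketch of ROI-based relative quantification (synthetic data only).
import numpy as np
from scipy import stats

def roi_intensity(image, mask):
    # Integrate fluorescence over all pixels within one varicosity ROI
    return float(image[mask].sum())

rng = np.random.default_rng(1)
# Synthetic 512x512 frames standing in for DAF-2T (NO) images
control_frames = [rng.poisson(5.0, (512, 512)).astype(float) for _ in range(3)]
efs_frames = [rng.poisson(20.0, (512, 512)).astype(float) for _ in range(3)]
mask = np.zeros((512, 512), bool)
mask[250:258, 250:258] = True            # ~8x8 pixel ROI around one varicosity

control = np.array([roi_intensity(f, mask) for f in control_frames])
efs = np.array([roi_intensity(f, mask) for f in efs_frames])

# Normalize both groups to the unstimulated control mean (arbitrary units)
norm = control.mean()
control, efs = control / norm, efs / norm

def mean_sem(x):
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

print("control mean±SEM:", mean_sem(control))
print("EFS     mean±SEM:", mean_sem(efs))
print("Student t-test p:", stats.ttest_ind(control, efs).pvalue)
```

In practice the ROI boundaries would be drawn around varicosities identified by XYZ scanning, and multiple-comparison corrections would be applied when more than two treatment groups are compared.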
2016-05-02T18:17:47.175Z
2009-04-02T00:00:00.000
{ "year": 2009, "sha1": "16e513f57152397a447492cf6e42838cf1efd24b", "oa_license": "CC0", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0004990&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "924aa3173d2c1b4f7be1c385c5083f270ee303ee", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
267782644
pes2o/s2orc
v3-fos-license
Random forests for detecting weak signals and extracting physical information: a case study of magnetic navigation It was recently demonstrated that two machine-learning architectures, reservoir computing and time-delayed feed-forward neural networks, can be exploited for detecting the Earth's anomaly magnetic field immersed in overwhelming complex signals for magnetic navigation in a GPS-denied environment. The accuracy of the detected anomaly field corresponds to a positioning accuracy in the range of 10 to 40 meters. To increase the accuracy and reduce the uncertainty of weak signal detection as well as to directly obtain the position information, we exploit the machine-learning model of random forests that combines the output of multiple decision trees to give optimal values of the physical quantities of interest. In particular, from time-series data gathered from the cockpit of a flying airplane during various maneuvering stages, where strong background complex signals are caused by other elements of the Earth's magnetic field and the fields produced by the electronic systems in the cockpit, we demonstrate that the random-forest algorithm performs remarkably well in detecting the weak anomaly field and in filtering the position of the aircraft. With the aid of the conventional inertial navigation system, the positioning error can be reduced to less than 10 meters. We also find that, contrary to the conventional wisdom, the classic Tolles-Lawson model for calibrating and removing the magnetic field generated by the body of the aircraft is not necessary and may even be detrimental for the success of the random-forest method. I. INTRODUCTION Random forest 1-3 is a supervised machine-learning method for solving classification and regression problems based on noisy feature data. The "forest" consists of a number of decision trees, each trained using a different subset of the characteristics and data points. A decision tree 4,5 recursively splits the feature space into two halves; each split is performed on a specific input feature using a threshold, and the depth of the tree is the number of available input feature signals. At the end of the tree construction, the whole feature space has been divided into a large number of small subspaces, each associated with a particular value of the physical quantity of interest (the target variable), leading to a "leaf." Given a number of feature signals in the form of, e.g., time series, and the value of the target variable, supervised training can be done through some standard optimization method to determine a proper set of the threshold values required for splitting the feature signals. After training is done, when a new set of feature signals is presented to the tree, a branch of the tree can be quickly identified which closely matches the corresponding feature values, and the leaf end of this branch gives the predicted value of the physical quantity for the particular set of feature values. In principle, a decision tree can be used for predicting the values of the target variable corresponding to different combinations of the values of the feature signals, but a single tree is susceptible to overfitting, especially when the feature space becomes large. Random forest solves this overfitting problem by combining a number of random decision trees, each responsible for a subset of the features and a portion of the training data.
More specifically, the working of the random-forest algorithm can be described as follows. For each tree in the forest, a subset of the features and a portion of the training data are randomly chosen, where each tree is given a slightly different perspective on the data, known as bootstrap sampling 6 . The algorithm then creates a decision tree utilizing the characteristics and data that have been chosen, using a splitting criterion such as the Gini index or information gain. The new tree is then added to the forest, and the process is repeated until a pre-specified requirement is met, at which point no more trees are produced. During the testing or prediction phase, the model combines the predictions from all the trees in the forest to provide a forecast. The predicted value of the target variable is the average or median of the predictions of all trees in the ensemble. Random forest is resilient to noise and outliers, and it alleviates overfitting since the ensemble of trees balances out the noise and variability in the data. Moreover, random forests are capable of handling missing data and can offer metrics of feature relevance. Random forest regression has been applied in various fields, such as healthcare 7,8 and transportation 9,10 , for tasks such as predicting traffic flow 11 and forecasting commodity prices 12 . Various extensions and modifications of the random-forest algorithm, such as extremely randomized trees 13 and quantile regression forests 14 , have been proposed to further improve the accuracy and robustness of the method. In this paper, we exploit random forests for accurate detection of weak physical signals and for predicting the values of a small number of physical variables of interest based on a relatively large number of noisy feature signals. In particular, we consider the situation where the feature signals can be continuously measured at all times, but the weak signal and the target variables can be assessed only in a special and well controlled environment with certain additional measurements - the calibration phase. The question is, in the deployment phase where the additional measurements for extracting the weak signal are no longer available and the target variables are not accessible any more, whether the weak signal and the variables can be predicted based on the available feature signals? A directly relevant application is magnetic navigation in a GPS (global positioning system) denied environment, where the goal is to use the Earth's anomaly magnetic field for precise positioning of an airplane, as this field is position-dependent 16,17 . In such an application, various sensors in the cockpit of the airplane are employed to generate a large number of feature signals for extracting the Earth's anomaly magnetic field through complex mathematical algorithms. With a predetermined map between the anomaly field and the position for the flying region, if the anomaly field can be accurately detected, precise positioning can be achieved.
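As a concrete illustration of the ensemble mechanics just described (bootstrap sampling, per-tree feature subsets, and averaging of the trees' predictions), a minimal sketch is given below. It uses scikit-learn's DecisionTreeRegressor as the base learner on synthetic placeholder data; the class name, hyperparameters, and data are illustrative assumptions and not part of the paper's actual implementation.

```python
# Minimal sketch of random-forest regression as described above: each tree is
# trained on a bootstrap sample and a random feature subset, and the ensemble
# prediction is the average over trees. Illustrative only; not the authors' code.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class TinyRandomForest:
    def __init__(self, n_trees=100, max_features=4, seed=0):
        self.n_trees, self.max_features = n_trees, max_features
        self.rng = np.random.default_rng(seed)
        self.trees, self.feat_idx = [], []

    def fit(self, X, y):
        n, d = X.shape
        for _ in range(self.n_trees):
            rows = self.rng.integers(0, n, n)                            # bootstrap sample (with replacement)
            cols = self.rng.choice(d, self.max_features, replace=False)  # random feature subset
            self.trees.append(DecisionTreeRegressor().fit(X[rows][:, cols], y[rows]))
            self.feat_idx.append(cols)
        return self

    def predict(self, X):
        # Average the predictions of all trees in the ensemble
        return np.mean([t.predict(X[:, c]) for t, c in zip(self.trees, self.feat_idx)], axis=0)

# Toy usage with synthetic feature signals (placeholder data)
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))
y = X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=2000)
model = TinyRandomForest().fit(X[:1500], y[:1500])
print("test RMSE:", np.sqrt(np.mean((model.predict(X[1500:]) - y[1500:]) ** 2)))
```

Library implementations such as scikit-learn's RandomForestRegressor additionally re-draw the candidate feature subset at every split rather than once per tree, which tends to decorrelate the trees further.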
A key challenge is that the anomaly magnetic field is weak and the feature signals from the sensors are overwhelmingly noisy due to the extensive electronic equipment in the cockpit and the other (dominant) components of the Earth's magnetic field. The problem then becomes one of detecting a weak signal from a noisy background that can be several orders of magnitude stronger than the signal and determining the components of the instantaneous position vector of the airplane (the target variables). In this regard, a machine-learning scheme based on reservoir computing or time-delayed feed-forward neural networks was recently developed to detect the Earth's anomaly field, with the implied equivalent positioning accuracy in the range of 10 to 40 meters 18.

Here, we articulate a random-forest based machine-learning scheme to detect the Earth's anomaly magnetic field and simultaneously determine the position of the flying airplane. Different from the previous work 18, we do not combine machine learning with other widely used calibration methods such as the standard Tolles-Lawson (TL) model. Rather, in the whole process, only the machine-learning model is employed to generate the anomaly-field signal and the positioning information based on the input feature signals. From time-series data gathered from the cockpit of an airplane, we demonstrate that the random-forest algorithm performs remarkably well in filtering the Earth's anomaly magnetic field and generating the instantaneous position of the aircraft. With the aid of the conventional inertial navigation system (INS), the positioning error can be reduced to less than 10 meters with negligible standard deviation or uncertainty. We also find that, contrary to the conventional wisdom, the classic TL model for calibrating and removing the magnetic field generated by the body of the aircraft is not necessary and may even be detrimental to the success of the random-forest method.

We remark that, for the task of precise positioning, our proposed random-forest method in fact does not require the use of INS sensors, for three reasons. First, the INS sensors have a limited range, making it impossible for them to be used for long-distance navigation. Second, due to the accumulation of minute inaccuracies in the readings, the INS sensors are subject to drift errors that reduce the accuracy over time. Third, connecting INS sensors with other navigation systems may be difficult and complex.

We wish to emphasize the dynamical nature of the complex signals in our study. The measured signal comprises various components, including the weak signal generated by the Earth's anomaly magnetic field (the target signal to be detected), the signal from the other (dominant) components of the Earth's magnetic field, and the signals generated by the electronic equipment within the airplane cockpit. While the signals other than the target signal represent some kind of "noise" to be removed and are typically much stronger, they are in fact complex signals with their own dynamics and time scales. To correctly describe these signals that need to be removed, we use the term "strong complex signals" or "overwhelming complex signals." It is the dynamical nature of these strong signals to be removed that makes the random-forest approach effective. We note a recent work demonstrating that noise in the conventional sense can be filtered out by machine-learning schemes such as reservoir computing 15.
In Sec. II, we provide a brief overview of the background of magnetic navigation, the TL model for flight magnetic-field calibration, and some previous machine-learning methods. In Sec. III, we describe the flight datasets, the machine-learning methods employed in our study (for the purpose of performance comparison), and the simulation and data-preprocessing details. Section IV presents results on feature selection, detection of the anomaly magnetic field, and precise positioning. A discussion and potential future research conclude the paper in Sec. V.

A. Earth's anomaly magnetic field for navigation in a GPS-denied environment

The Earth's magnetic field, or the geomagnetic field, is comprised of several field components 19. The main component is the core field generated by the motion of molten iron in the Earth's outer core, with its magnitude ranging from 25 to 65 micro-Tesla at the surface of the Earth. While the core field makes compasses point north and is responsible for geophysical phenomena such as the auroras, its magnitude is still quite weak: about 100 times weaker than a refrigerator magnet. The second component is the crustal anomaly field generated by the Earth's crust and upper mantle, whose magnitude is about 100 nano-Tesla, roughly 100 times weaker than the core field. While the core field is the dominant component of the geomagnetic field, it is weakly time-dependent and is not sensitive to changes in the position, rendering it unsuitable for precise positioning and navigation. In contrast, the anomaly field is position-dependent and has much stronger spatial variations than the core field. Consequently, in principle it is possible to exploit the anomaly field for navigation.

The widely used GPS can achieve a positioning accuracy of less than 10 m worldwide. However, because GPS signals are weak electromagnetic signals and must be transmitted over long distances, GPS is vulnerable to external interference such as jamming or spoofing 20. In a GPS-denied environment, alternative navigation systems are needed for positioning, which include radio-based navigation 21, computer-vision-based navigation 22, star-trackers 23, terrain height matching 24, and gravity gradiometry 25. Despite the outstanding performance of these navigation approaches in some specific scenarios, they are unable to work universally under different circumstances. For instance, terrain-aided navigation relies on the unique features of the terrain, which loses efficacy when working around oceans and deserts, and star-trackers rely on the stars, so they are not workable during the day or in cloudy weather. Different from these methods, the Earth's anomaly field is approximately time-invariant but strongly spatially variant, making magnetic navigation an appealing alternative 16,17 to GPS. Indeed, anomaly-field based magnetic navigation is limited neither to particular terrains nor to the time of the day. Another advantage is that, unlike active navigation such as GPS, magnetic navigation is a kind of passive navigation and, because the power of the magnetic field decreases as d^-3 with distance d, it is not practically possible to disrupt a magnetic navigation device through jamming 26. A great challenge of magnetic navigation is that the anomaly field is extremely weak and usually is embedded in an overwhelmingly strong noisy background. For example, in the cockpit of a flying aircraft, various types of electronic devices are in active operation 27. To make
magnetic navigation feasible, extracting the weak anomaly-field signal from the strong complex signals is essential. With the availability of a predetermined magnetic map, the extracted clean anomaly-field signal can be used to determine the instantaneous position of the aircraft, possibly with the aid of a standard INS 28.

B. Tolles-Lawson model

To realize magnetic navigation, effective methods to extract the anomaly magnetic-field signal and to obtain the real positioning information are needed. For a flying airplane, the magnetic field generated by the body of the aircraft must be removed from the measured signal to yield the Earth's magnetic field. The Tolles-Lawson (TL) model 29-31 is a linear aeromagnetic compensation method that estimates the magnetic field generated by the aircraft from the total measured magnetic field. When the aircraft is flying ideally in a "magnetically quiet" mode, e.g., when there are only limited radio transmissions 32, the TL model performs well in extracting the anomaly field from the signals measured by magnetometers placed on the exterior surface of the airplane. However, when the magnetometers are placed inside the cockpit of the airplane, the TL model is not sufficient to remove the overwhelming complex signals. In spite of this, the TL model still represents a state-of-the-art model for calibrating the anomaly field through magnetometers placed outside the airplane and for pre-filtering the data.

C. Previous machine-learning methods

The last thirty years have witnessed the use of machine learning for magnetic navigation. Earlier, neural networks were proposed 33 as a model-free method for aeromagnetic calibration. About three years ago, hundreds of complicated neural networks were trained and it was demonstrated 34 that the nonlinear machine-learning method is capable of reducing the externally added noise and extracting the magnetic anomaly signal. It was proposed 28 recently that the anomaly field can be extracted with small errors by combining the TL model and machine learning. In particular, an extended Kalman filter was used to demonstrate that the extracted anomaly field can lead to low positioning errors. More recently, two machine-learning methods, one based on recurrent neural networks and another using feed-forward neural networks with time-delayed inputs, in combination with the TL model, were articulated 18 for detecting the weak anomaly field from measurements performed inside the cockpit of a flying airplane.

A. Data source

The datasets used in our work come from the Signal Enhancement for Magnetic Navigation challenge problem 19. The goal was to extract a "clean" magnetic anomaly field from measured complex signals by using a trained neural network. To achieve this, several test flights were conducted, each containing several segments (or lines). Five magnetometers were used to record the magnetic-field signals, one placed at the tail stinger of the aircraft (in a "magnetically quiet" mode), as shown in Fig. 1. In particular, the tail-stinger signal was calibrated by the TL model, resulting in the ground truth of the true magnetic anomaly field. In our study, GPS signals were also used to provide the positioning information, representing the "ground truth" of the aircraft position.
B. Three machine-learning methods used in this study

The problem of extracting the magnetic anomaly signal from noisy measurements belongs to signal filtering. Classical signal-processing methods such as linear filters or wavelet transformations are not suitable because the frequency bands of the embedded signal and the strong complex signals overlap completely. Machine learning provides a potent and automated way of signal filtering, making it feasible to extract useful information from complex signals. In general, neural networks can be powerful for processing complicated signals with nonlinear properties. For example, convolutional neural networks (CNNs) 35-37 have been used in signal-filtering tasks, including denoising electroencephalography (EEG) signals or removing motion artifacts from magnetic resonance imaging (MRI) data 38. Here, we describe the three machine-learning methods used in our work.

K-nearest neighbor (KNN) approach: The KNN approach 39 is a non-parametric method used for regression and classification. The value of an instance in KNN regression is determined by the mean or median of the k closest instances. The user-selected parameter k establishes how many nearest neighbors are utilized to produce a forecast. This enables the method to model nonlinear connections and capture complex patterns in data without making any assumptions about the distribution of the underlying data, a key benefit for certain problems 40-42. The KNN method has several limitations, such as sensitivity to the choice of the distance metric, the curse of dimensionality, and high computational complexity. To address these limitations, the decision-tree and random-forest methods can be used.

Decision tree: A decision tree is also a regressor and classifier that iteratively segments the feature space into regions and constructs a tree-like structure performing a prediction for each smaller portion 4,5. The feature space is recursively split into two halves and a tree is constructed using the features that provide the largest sum-squared error reduction. The resulting structure is a binary tree, with the leaves designating the final regions and the projected value being the mean or median of the target variable for the data points in that region. One advantage of decision-tree regression is interpretability, since the resulting tree structure can be visualized for understanding. Decision trees can also handle nonlinear and multi-dimensional data and are robust to noise and outliers. Yet, decision-tree regression is susceptible to overfitting when the tree becomes complex and the data are noisy. To overcome the overfitting problem, pruning, early stopping, and regularization can be used to improve the generalization performance of a decision tree. Decision-tree regression has been applied in a variety of fields, including engineering, finance, and the environmental sciences, for applications such as building energy-consumption modeling and air-pollution forecasting 43-47.

Random forest: As shown in Fig.
2, a random forest is an ensemble of decision trees 1-3, each trained using a different subset of the features and data points. Techniques such as weighted voting or stacking can also be employed for constructing a random forest. A key benefit of a random forest is its ability to prevent overfitting, since the ensemble of trees balances out the noise and variability in the data. A random forest is also resilient against noise and outliers. However, when there are too many or too few trees in the ensemble, a random forest may suffer from bias and correlation and may not work well with data that have complex dependencies or nonlinear interactions.

C. Simulation hardware and software

Our simulations were carried out on a desktop system with one NVIDIA GeForce GTX 750 Ti GPU, an Intel Core i7-6850K CPU @ 3.60 GHz, and 128 GB of RAM. During the training process, the n_jobs keyword (of the model's fit method) was set to -1 so that the simulations were done using all available logical cores. All codes were written in Python, where we used sklearn, a machine-learning Python package, to train and test our algorithms.

D. Data preprocessing

We use real data selected from several flights (numbers 1002 to 1007) conducted by Sanders Geophysics Ltd. (SGL) near Ottawa, Canada. For instance, the dataset of flight 1002 consists of 207580 instances with the sampling time dt = 0.1 s, each comprised of 102 features from a collection of various sensor measurements: voltage, current, magnetic and other sensors, as well as the position, INS, and avionics-system readings. The position of the aircraft is derived from the WGS xyz coordinates included in the dataset, which are GPS positions, and is the predicted target of the model, while the other features [selected features or those from a principal-component analysis (PCA)] are used as the inputs to the random-forest models. For the random-forest models, the number of estimators (trees) is selected to be 100. The dataset is used to train the models in order to filter the position out of the available sensor data. The performance is evaluated using the predicted root-mean-square errors (RMSEs) on a held-out unseen test set.
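A hedged sketch of this training and evaluation setup (100 trees, all available cores, RMSE on a held-out test set) is given below; the file name and column names are hypothetical placeholders rather than the actual SGL flight-data schema.

```python
# Sketch of the random-forest positioning setup described above, on a hypothetical
# flight-data file with hypothetical position columns.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

df = pd.read_csv("flight_1002.csv")                  # hypothetical file name
target_cols = ["pos_x", "pos_y", "pos_z"]            # hypothetical WGS xyz position columns
feature_cols = [c for c in df.columns if c not in target_cols]
X, y = df[feature_cols].to_numpy(), df[target_cols].to_numpy()

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)  # 100 trees, all cores
model.fit(X_tr, y_tr)                                # multi-output regression of the position

rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"held-out position RMSE: {rmse:.2f}")
```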
The original data contained missing values, outliers, and other anomalies, making extensive preprocessing necessary. In general, machine-learning methods require normalizing the data as an essential pre-processing step 48. A simple method is to normalize the dataset so that all the features have zero mean and unit variance. Another method is min-max scaling, where each feature is scaled individually such that it falls in a given range of the training set, e.g., between zero and one. Normalization in signal filtering also helps in removing bias in the signal that may be brought on by variations in scale or units among the features. In our case, normalizing the data ensures that any two features with different scales, e.g., one measured in nano-Tesla and the other in Ampere, have the same scale. We find that, for our datasets, the min-max scaling method outperforms the standard-scaler normalization. Because of the remaining variance after min-max scaling, it is useful to remove features with low variance, since they contribute little to the process of filtering 49. This can be done by setting a proper threshold value of the variance. Removing the features with variances below the threshold brings additional benefits such as speeding up feature selection, reducing overfitting, and making the model and results more interpretable. Table I sorts the features of the normalized flight data (including flights 1002 to 1007) in terms of their standard deviation. As a reasonable assumption, the exclusion variance threshold is set as 0.0025. For the flight data, this means that the features cur_flap, mag_2_uc, and cur_com_1 are removed at this step.

A. Feature selection

A key step is sequential feature selection, in which features are added to or removed from the feature set in accordance with a predefined criterion such as the RMSE, R2, or F1 score (depending on the application) until an optimal subset of features is found 50. Here, optimality means finding the smallest set of features leading to the required predictive performance, which not only makes the model more understandable, but also reduces overfitting, increases prediction accuracy, and results in a computationally efficient training process. Given a set of features, optimal selection can be performed forward, backward, or as a mix of both. Because of the relatively large number of features in our dataset, we exploit the forward selection algorithm, which means that, at each step, the algorithm selects the best feature to add or remove based on the cross-validation score of the trained random-forest model. Our computation leads to 12 features: mag_3_uc, mag_4_uc, mag_5_uc, diurnal, flux_b_x, flux_b_y, flux_c_y, ins_vw, ins_wander, static_p, total_p, and vol_srvo. Note that the ins_lon, ins_lat, and ins_alt features are excluded from the pool because they are equivalent to the target positions, as shown in Fig. 3.
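The preprocessing and forward selection steps described above can be sketched as follows; this is an assumed sklearn-based illustration on placeholder arrays, not the exact pipeline used for the flight data.

```python
# Sketch: min-max scaling, removal of low-variance features (threshold 0.0025), then
# forward sequential feature selection with a random-forest scorer.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import VarianceThreshold, SequentialFeatureSelector
from sklearn.ensemble import RandomForestRegressor

X = np.random.rand(5000, 30)          # placeholder feature matrix
y = np.random.rand(5000)              # placeholder target (e.g., anomaly field)

X_scaled = MinMaxScaler().fit_transform(X)          # scale each feature to [0, 1]
vt = VarianceThreshold(threshold=0.0025)            # drop near-constant features
X_reduced = vt.fit_transform(X_scaled)

sfs = SequentialFeatureSelector(
    RandomForestRegressor(n_estimators=50, n_jobs=-1, random_state=0),
    n_features_to_select=12, direction="forward",
    scoring="neg_root_mean_squared_error", cv=3, n_jobs=-1)
X_selected = sfs.fit_transform(X_reduced, y)
print("selected feature indices:", np.flatnonzero(sfs.get_support()))
```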
For the INS-free method, the ins_vw and ins_wander features are excluded as well.

We apply PCA to further process the feature data, where the components are sorted by the eigenvectors in the order of their corresponding eigenvalues from the highest to the lowest. There are two reasons for the PCA analysis. First, for the selected features to be effective, the correlations among them can be neither too high nor too low. In particular, high correlations can produce "multicollinearity," where numerous independent variables in a model are interrelated, making it difficult to predict the individual impact of each feature on the dependent variable. Figure 3 displays the correlation among the selected features, where it can be seen that the features associated with the INS sensors have high correlations. For our magnetic navigation problem, the degree and direction of the association among the sensor readings and the aircraft position are affected by feature correlations. The highly correlated features can be integrated into new, uncorrelated features named principal components through PCA for dimension reduction, which can then be used as inputs to our machine-learning based filter to remove the strong complex signals and redundant data. Second, PCA can improve visualization and help better understand the data by transforming the original high-dimensional data into low-dimensional ones. To establish a proper trade-off between dimensionality reduction and information retention, we consider the number of main components beyond which further feature reduction would lead to information loss. Figures 4(a) and 4(b) show the first two and three components of PCA applied to the set of selected features (without TL and INS), respectively, where the color elements represent the normalized Euclidean distance of the dataset instances from the origin. It can be seen from the PCA components that there exist distinct clusters in the data in terms of the Euclidean distance.

We remark that nonlinear feature-reduction methods such as Isomap and kernel PCA are not suitable for our problem because of the large sizes of the datasets. In particular, to compute the kernel matrix of size (data samples, data samples), it is necessary to compute and store a number of terms equal to the square of the number of data samples. For our datasets, this requires about 312 GB of computer RAM. A potential solution is to perform clustering on the dataset and fill the kernel with the means of those clusters. However, even this approach might produce a large kernel matrix.

B. Detection of the anomaly magnetic field

We demonstrate the power of our random-forest method to detect the weak anomaly field from data. For comparison, we also display the results from the KNN and decision-tree methods. Each machine-learning model uses the chosen optimal set of 12 features as input and the magnetic anomaly-field signal as the output. To ensure a fair comparison, we perform hyperparameter tuning and calculate the average RMSE of the magnetic anomaly field for different flight lines. Figure 5 shows that, for our random-forest model, the average test RMSE is about 1.9 nT for all flights. Table III lists the average RMSEs from the different machine-learning methods. It can be seen that the KNN method has the largest error, and the decision-tree method suffers from overfitting, which is overcome by the random-forest method. These results indicate that the random forest is a reliable approach for detecting weak magnetic anomaly fields from data.
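A hedged sketch combining the PCA-based dimension reduction and the three-model comparison described above is given below; all arrays are random placeholders rather than the SGL flight data, and the hyperparameter values are illustrative only.

```python
# Sketch: reduce the selected features with PCA, then compare KNN, decision-tree, and
# random-forest regressors by held-out RMSE.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X = np.random.rand(5000, 12)          # placeholder: the 12 selected (scaled) features
y = np.random.rand(5000)              # placeholder target (anomaly field, in nT)

pca = PCA(n_components=0.95)          # keep enough components for 95% of the variance
X_pca = pca.fit_transform(X)
print("components kept:", pca.n_components_)

X_tr, X_te, y_tr, y_te = train_test_split(X_pca, y, test_size=0.2, random_state=0)
models = {
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "decision tree": DecisionTreeRegressor(random_state=0),
    "random forest": RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
    print(f"{name}: test RMSE = {rmse:.3f}")
```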
C. Random-forest based precise positioning

For navigation positioning, to ensure that the appropriate features are selected, we carry out a feature-importance analysis. For a tree-based method such as a decision tree or a random forest, it is practically impossible to analyze the feature importance from the model weights, so a dedicated feature-importance analysis is crucial. Traditional metrics such as the Gini impurity 51 and the mean decrease in impurity 52 are often used, and combining these techniques with domain expertise and cross-validation can lead to a more comprehensive understanding of feature importance. We use two standard methods: permutation and dropping 53. In feature permutation, the performance of the model is assessed by randomly permuting the values of a single feature and comparing the results with a baseline model: the more the performance is degraded, the more important that feature is. In the feature-dropping approach, the model's performance is assessed by eliminating one feature at a time and monitoring how the result changes. Each feature's importance is deduced from the performance drop that results from the removal of that feature. These methods are computationally efficient for large datasets, and they serve to evaluate the significance of each feature without any knowledge about the intrinsic dynamics of the trained machine-learning model, making them suitable for a variety of applications.

FIG. 6: Importance analysis of selected features in the absence of INS. Random-forest filtering of the position of the aircraft using the selected features in Table II is performed on a testing dataset. Both the permutation and dropping methods give that the diurnal feature is the most significant. Other selected features are less significant, and removing the vol_srvo feature can improve the performance.

Figures 6 and 7 illustrate the feature-importance analysis while performing the random-forest filtering of the position of the aircraft using the selected features in Table II. It can be seen that, for the INS-free case, both the permutation and dropping methods give that the diurnal feature is the most significant. Note that the diurnal feature can only be measured independently with an external base station and will not generally be available on the aircraft; there are approaches to modeling it based on past history, or a statistical approach could be used. For the INS-aided model, the ins_wander feature is the most important, since the performance degrades significantly in its absence compared to the baseline method.
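The permutation and dropping analyses described above can be sketched as follows; this is an assumed illustration on placeholder arrays, not the authors' implementation.

```python
# Sketch of the two importance measures: permutation importance and drop-one-feature
# ("dropping") importance, using placeholder data in place of the selected features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import mean_squared_error

X_tr, y_tr = np.random.rand(2000, 12), np.random.rand(2000)
X_te, y_te = np.random.rand(500, 12), np.random.rand(500)

def rmse(model, X, y):
    return np.sqrt(mean_squared_error(y, model.predict(X)))

base_model = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0).fit(X_tr, y_tr)

# Permutation importance: shuffle one feature at a time and measure the score degradation.
perm = permutation_importance(base_model, X_te, y_te, n_repeats=10, random_state=0,
                              scoring="neg_root_mean_squared_error")
print("permutation importances:", perm.importances_mean)

# Dropping importance: retrain without each feature and measure the RMSE increase.
baseline = rmse(base_model, X_te, y_te)
for j in range(X_tr.shape[1]):
    keep = [k for k in range(X_tr.shape[1]) if k != j]
    m = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0).fit(X_tr[:, keep], y_tr)
    print(f"drop feature {j}: RMSE increase = {rmse(m, X_te[:, keep], y_te) - baseline:+.3f}")
```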
We compare the performance of the three methods, KNN, decision tree, and random forest, in positioning using both the selected features (Table II) and the PCA features, in terms of the RMSEs from the training and testing sets on different flight datasets. An important initial step is to determine the hyperparameter values. In general machine learning, the hyperparameters are those that cannot be learned from the data but must be pre-determined before training, the selection of which is critical to the success of machine learning. For the decision-tree and KNN methods, we use PyGad 54 to tune the hyperparameter values. The random-forest method has a single hyperparameter, max_depth. Figure 8 shows the RMSE versus max_depth, which gives that the optimal value of max_depth is around 25.

FIG. 7: Importance analysis of selected features with INS. The position of the aircraft was filtered using a random-forest algorithm on a test set utilizing the selected features in Table II. Both the permutation and dropping methods give that the ins_wander feature is the most significant. The second most significant feature is diurnal.

FIG. 8: RMSE versus max_depth, the depth of the forest (the single hyperparameter). The optimal value of max_depth is around 25.

Table IV summarizes the best-performance results with INS data in terms of the DRMS, the distance between the actual and predicted positions of the aircraft. The "all flight" line means that the entire dataset is used in training and testing, whereas "all but 1005" denotes that the entire dataset except that from flight 1005 is used for training but the test is performed on the entire dataset including flight 1005. The results show that, in all cases, the selected features perform better than the PCA features. Remarkably, the random forest outperforms the other two methods in the testing DRMS. For the decision-tree method, overfitting is pronounced because it performs better than the random forest in training but worse in testing. Figure 9 shows the results of the TL INS method.
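A hedged sketch of the max_depth scan described above is given below, with random placeholder arrays standing in for the flight data.

```python
# Sketch: tune the single hyperparameter max_depth by scanning values and comparing
# held-out RMSE; the depth minimizing the test RMSE would be selected.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

X_tr, y_tr = np.random.rand(3000, 12), np.random.rand(3000)
X_te, y_te = np.random.rand(1000, 12), np.random.rand(1000)

for depth in [5, 10, 15, 20, 25, 30, None]:
    model = RandomForestRegressor(n_estimators=100, max_depth=depth,
                                  n_jobs=-1, random_state=0).fit(X_tr, y_tr)
    rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
    print(f"max_depth={depth}: test RMSE = {rmse:.3f}")
```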
V. DISCUSSION

Exploiting machine learning to detect weak physical signals immersed in strong complex signals has attracted recent interest, with applications such as extracting the Earth's anomaly magnetic field from overwhelmingly noisy signals collected from the magnetic sensors installed inside the cockpit of a flying airplane. Previously, the specific machine-learning architectures explored for this application were reservoir computing (recurrent neural networks) and time-delayed feed-forward neural networks 18. It was demonstrated that, when combined with the classical TL model for removing the aircraft magnetic field, the weak anomaly magnetic field can be reliably detected. Detecting the anomaly field has a direct application in magnetic navigation in a GPS-denied environment 16,17. For the navigation problem, the goal is precise positioning: obtaining the instantaneous position of the flying airplane, a problem that was not addressed in the recent work 18, raising the need for a more general approach to magnetic navigation.

The present work develops a random-forest based machine-learning approach to magnetic navigation, where the ensemble of decision trees is trained so that a given combination of feature-signal values is associated with a specific value of the anomaly magnetic field and the position of the airplane. In the testing phase, when the information about the anomaly field and the position is not available, a well-trained random forest can be efficiently searched, from a set of feature signals, to yield the anomaly field and the position. As many of the available feature signals contain redundant information, we performed a selection process to obtain the optimal set of feature signals so as to greatly improve the computational efficiency. As a random forest is essentially an "intelligent" book-keeping machine, all that is required for training is the selected set of feature signals and the corresponding anomaly field and position, thereby removing the need for calibration methods, e.g., the TL model. Indeed, we demonstrated that high accuracies in the anomaly field and position can be achieved even without TL calibration.

More generally, the random-forest framework developed here can be applied to signal filtering, an important task in many fields including image processing, speech recognition, and economic forecasting. There are many challenges in designing effective filtering algorithms, such as dealing with noise, nonstationary signals, and nonlinear dependencies. The success of deploying random forests for magnetic navigation reported here can serve as a starting point to generalize the machine-learning model to other applications. For the broad task of signal filtering, a potential future research direction could be to investigate deep-learning methods, such as convolutional neural networks, especially temporal graph convolutional neural networks 55. Additionally, transfer learning could be explored, where a pre-trained model is fine-tuned on a new dataset for signal filtering. In this case, the knowledge learned from previous flights can be employed in learning new flight data, thereby requiring shorter flight times. Furthermore, the development of online-learning algorithms for signal filtering could be explored, which could adapt to changes in the signal over time.
FIG. 1: Configuration of magnetometers on the aircraft. The signals from the magnetometers inside the airplane contain strong complex signals produced by the various electronic devices. The signal from the magnetometer placed at the tail stinger is free from these overwhelming complex signals and, after the TL calibration, leads to the true magnetic anomaly-field signal. The measurements from the other four magnetometers contain the anomaly field embedded in strong complex signals. The datasets are from test flights conducted by Sanders Geophysics Ltd. (SGL) near Ottawa, Canada.

FIG. 2: A schematic illustration of a random forest of decision trees. (a) A random forest and (b) a decision tree in the "forest." Prediction of the target variable of interest is achieved by taking the average or median of the predictions from all the trees in the forest.

FIG. 3: Correlation among the selected features in Table II. The information from the correlation is used to prune the redundant features. High correlation among the features can produce "multicollinearity," where numerous independent variables in a model are interrelated, making it challenging to predict the individual impact of each feature on the dependent variable.

FIG. 4: Colored representation of the normalized Euclidean distance of the dataset instances from the origin. Distinct clusters in the data in relative Euclidean distance can be observed.

FIG. 5: Results of random-forest based detection of weak signals. Selected features are used to detect the weak anomaly magnetic-field signal. Compared with a recent work on this topic based on the machine-learning methods of reservoir computing and feed-forward neural networks 18, the average RMSEs from the random-forest method are reduced by over 100%.

FIG. 9: Results from the TL INS method. The compensated values from the magnetic sensors, the Tolles-Lawson model, and the INS sensors are used. The comparison between the TL INS free and INS aided methods suggests that including the TL model tends to degrade the positioning performance.

FIG. 10: Results from the INS aided method. The uncompensated values from the magnetic sensors and the INS sensors are used. The INS aided method gives the best performance.

TABLE I: Standard deviations of the features of the entire flight dataset.

TABLE II: Selected features using forward sequential feature selection on the entire flight dataset.

TABLE III: Performance comparison among the KNN, decision-tree, and random-forest methods in terms of RMSEs using selected features to detect the weak anomaly magnetic-field signal (in units of nT).

TABLE IV: Performance comparison in terms of DRMS among the KNN, decision-tree, and random-forest methods using PCA and selected features for different flight datasets using the INS aided method (in units of meters).

TABLE V: Mean and standard deviation of RMSE from the entire experiment for different methods (in units of meters).
2024-02-23T06:44:59.723Z
2024-02-21T00:00:00.000
{ "year": 2024, "sha1": "f443a2169fbe286458782cf94271222113a40cb2", "oa_license": "CCBY", "oa_url": "https://pubs.aip.org/aip/aml/article-pdf/doi/10.1063/5.0189564/19730114/016118_1_5.0189564.pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "f443a2169fbe286458782cf94271222113a40cb2", "s2fieldsofstudy": [ "Physics", "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Engineering", "Computer Science", "Physics" ] }
44035695
pes2o/s2orc
v3-fos-license
Cardioprotective effect of hydroalcoholic extract of Tecoma stans flowers against isoproterenol induced myocardial infarction in rats

Introduction

Cardiovascular diseases (CVD) remain the principal cause of death in both developed and developing countries, accounting for roughly 20% of all annual deaths worldwide. Myocardial infarction is the rapid development of myocardial necrosis caused by a critical imbalance between the oxygen supply and demand of the myocardium. The increased generation of toxic reactive oxygen species (ROS) such as superoxide (O2·−), hydrogen peroxide (H2O2) and the hydroxyl radical (·OH) exerts severe oxidative stress on the myocardium, predisposing to CVD such as ischemic heart disease, atherosclerosis, congestive heart failure, cardiomyopathy and arrhythmias [1,2].

Ischemic heart disease is the leading cause of morbidity and mortality worldwide and, according to the World Health Organization, it will be the major cause of death in the world by the year 2020 [1]. Myocardial infarction results from prolonged myocardial ischemia with necrosis of myocytes due to interruption of the blood supply to an area of the heart [2]. CVDs are the secondary cause of deaths in many parts of the world; although modern drugs are effective in preventing the disorders, their use is often limited because of their side effects and adverse reactions. A wide array of plants and their active principles, with minimal side effects, provide an alternate therapy for ischemic heart disease [3]. Isoproterenol (ISO), a synthetic catecholamine and beta-adrenoceptor agonist, has been found to cause severe stress in the myocardium resulting in infarct-like necrosis [4]. It is also well known that ISO generates free radicals leading to lipid peroxidation, which cause irreversible damage to the myocardium [5]. The increase in formation of ROS during ischemia/reperfusion and the adverse effects of oxygen radicals on the myocardium have been well established by both direct and indirect parameters. Thus, isoproterenol causes loss of functional integrity and necrotic lesions in the heart muscle [6].

Tecoma stans (T. stans), locally available, has been used in the traditional system of medicine for treating diabetes mellitus, bacterial infections [7-9], arterial hypotension, gastrointestinal tract disorders and various cancers. The plant is an effective remedy for snake and rat bites. It is also used as a vermifuge and tonic [10,11]. The literature reveals the presence of triterpenes, hydrocarbons, resins and volatile oils. The flower contains flavonoids, tannins, traces of saponins, alkaloids, tecomine, tecostidine, beta-carotene and zeaxanthin [12,13]. In spite of this abundance of phytoantioxidants, there is no scientific information available regarding a cardioprotective effect; hence, in the present study an attempt was made to investigate the protective effect of the 70% ethanolic extract of T.
stans flowers against experimentally induced myocardial infarction in rats.

Plant material and extraction

Flowers of T. stans were collected from Bettada Malleshwara temple, Kumaranahalli Village in Davanagere, and authenticated by Professor K. Prabhu, Department of Pharmacognosy, SCS College of Pharmacy, Harapanahalli. A herbarium specimen SCSCOP.Ph.Col Herb.No. 012/2006-2007 is preserved in the college museum. The dried powder of the flowers was defatted with petroleum ether and then extracted with 70% ethanol using a Soxhlet apparatus. The extract was concentrated under reduced pressure using a rotary flash evaporator and stored in an airtight container in a refrigerator below 10 °C. The same extract was used for the pharmacological investigations, after subjecting it to preliminary qualitative phytochemical studies.

Chemicals

ISO was procured from Sigma-Aldrich Chemicals Ltd, St. Louis, USA, and the biochemical kits from Erba Mannheim, Germany. All the chemicals used were of analytical grade.

Animals

Wistar albino rats (weighing 150-250 g) and albino mice (weighing 20-25 g) of either sex were used in this study. They were procured from Sri Venkateshwara Enterprises, Bengaluru. The animals were acclimatized for one week under laboratory conditions. They were housed in polypropylene cages and maintained at (27±2) °C under a 12 h dark/light cycle. They were fed with standard rat feed (Gold Mohur, Lipton India Ltd.) and water ad libitum. The husk in the cages was renewed thrice a week to ensure hygiene and maximum comfort for the animals. Ethical clearance for handling the animals was obtained from the Institutional Animal Ethical Committee prior to the beginning of the project work, registration no. SCSCOP/665/2008-09 dated 24.11.2008.

Induction of myocardial injury

A total of 30 healthy rats of 200-250 g were randomly allotted into 5 groups of 6 animals. Animals in group I were treated as the negative control and were fed with normal saline for 16 d. The animals of groups II, III, IV and V were fed daily, p.o., with normal saline, simvastatin (60 mg/kg), 70% ethanolic extract of T. stans flowers (250 mg/kg) and 70% ethanolic extract of T. stans flowers (500 mg/kg), respectively, for 16 d. Then, animals of all the groups were given ISO (200 mg/kg), s.c., for two consecutive days at a 24 h interval. At the end of the experimental period (after 24 h of the second ISO injection, i.e., the 16th day of extract/vehicle treatment), all the rats were anaesthetized with urethane (1 g/kg, i.p.) and blood was collected from the retro-orbital plexus; the serum was separated and used for the determination of diagnostic markers: alanine aminotransferase (ALT), aspartate aminotransferase (AST), lactate dehydrogenase (LDH), creatine kinase (CK), total cholesterol (TC), triglycerides (TG), low-density lipoproteins (LDL) and high-density lipoproteins (HDL). These were assayed in serum using standard kits supplied by Erba Mannheim, India. The heart tissue was excised immediately and washed with chilled isotonic saline, and then tissue homogenates were prepared in ice-cold 0.1 mol/L Tris-HCl buffer (pH 7.2) and used for the assay of lipid peroxidation, reduced glutathione (GSH), superoxide dismutase (SOD) and catalase (CAT) [14]. The hearts were also stored in 10% formalin for histological studies to evaluate the details of the myocardial architecture in each group microscopically.
Histopathological studies

Pieces of heart from each group were fixed immediately in 10% neutral formalin for a period of at least 24 h, dehydrated in graded (50%-100%) alcohol, embedded in paraffin, cut into 4-5 µm thick sections and stained with hematoxylin-eosin [15]. The sections were evaluated for pathological/rejuvenative changes in the myocardial tissue.

GC-MS analysis

The GC-MS analysis of the hydroalcoholic extract was carried out on a Thermo GC-Trace Ultra ver. 5.0 gas chromatograph with a Thermo MS DSQ II mass spectrometer, fitted with a DB 35-MS capillary standard non-polar column (30 m, ID: 0.25 mm, film thickness: 0.25 µm) or an equivalent column. The carrier gas was helium with a flow rate of 1.0 mL/min; the column temperature was initially held at 70 °C for 2 min, then raised at the rate of 8 °C per minute and maintained at 300 °C for 40.52 min; the injector temperature was 240 °C, the detector temperature 260 °C, and the volume injected was 1 µL, as a liquid injection of the 70% ethanol extract in ethanol (1 g in 5 mL ethanol). The mass spectrometer operating parameters were as follows: ion source temperature: 250 °C; ionization potential: 70 eV; solvent delay: 3 min; program run time: 31 min; scan range: 30-350 amu; EV voltage: 3 000 V. Finally, the structural fragments were identified on the basis of retention time by using Wiley 07, a commercial library software.

Statistical analysis

Results were expressed as mean±SEM (n=6). Statistical analyses were performed with one-way analysis of variance (ANOVA) followed by Dunnett's multiple comparison test using GraphPad InStat software. A P value less than 0.05 was considered to be statistically significant: *P<0.05, **P<0.01 and ***P<0.001, when compared with the control or toxicant group as applicable.

Effect of 70% ethanolic extract of T. stans flowers on biochemical markers

Rats treated with ISO alone (positive control group) developed significant heart injury, as evidenced by a significant elevation in biochemical markers such as ALT, AST, LDH, CK, TC, TG and LDL and a depletion of HDL levels when compared with the negative control (group I). Oral administration of the test extract produced a dose-dependent, significant reduction in the ISO-induced increase in these biochemical levels and prevented the fall of HDL levels. As expected, simvastatin 60 mg/kg restored all the biochemical parameters significantly to near-normal levels. All the results were statistically significant. The results are summarized in Table 1.

3.2. Effect of 70% ethanolic extract of T. stans flowers on tissue GSH, lipid peroxidation, SOD and CAT

There was a marked depletion of GSH, SOD and CAT levels in the ISO-treated group. Treatment with 70% ethanolic extract of T. stans flowers prevented the fall in GSH, SOD and CAT levels in a dose-dependent manner to near-normal levels. The test extract was found to be statistically significant at both the lower and higher doses in normalizing tissue GSH, SOD and CAT levels. Treatment with 60 mg/kg simvastatin, the standard drug, also prevented the depletion of GSH, SOD and CAT. The levels of lipid peroxidation were restored to near-normal levels by pretreatment with 70% ethanolic extract of T. stans flowers as compared to the positive control group, in a dose-dependent manner. All the results were statistically significant (P<0.05). The results are summarized in Table 2.
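As an illustrative aside, the one-way ANOVA followed by Dunnett's multiple comparison test described in the statistical analysis above can be sketched in Python; the numerical values below are made-up placeholders rather than the study's data, and the original analysis was performed with GraphPad InStat rather than SciPy.

```python
# Sketch: one-way ANOVA across the treatment groups, then Dunnett's test comparing
# each treated group against the control (requires SciPy >= 1.11 for stats.dunnett).
import numpy as np
from scipy import stats

control     = np.array([78.0, 82.0, 80.0, 85.0, 79.0, 81.0])   # hypothetical marker values, n=6
iso_only    = np.array([160.0, 155.0, 170.0, 165.0, 158.0, 162.0])
extract_250 = np.array([130.0, 128.0, 135.0, 132.0, 127.0, 131.0])
extract_500 = np.array([105.0, 110.0, 102.0, 108.0, 104.0, 107.0])

f_stat, p_anova = stats.f_oneway(control, iso_only, extract_250, extract_500)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

res = stats.dunnett(iso_only, extract_250, extract_500, control=control)
print("Dunnett p-values vs control:", res.pvalue)
```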
The GC-MS analysis of the hydroalcoholic fraction of the ethanolic extract of T. stans flowers confirmed peaks corresponding to quercetin-like and catechol-like compounds, in the ranges of 296-310 amu and 106-130 amu, respectively (Figure 1).

Histopathological studies

In Figure 2, the section studied from the myocardium shows intact integrity of the myocardial cell membrane, myofibrillar structure with striations and continuity with adjacent myofibrils. The interstitial space appeared intact. The vascular spaces amidst these cardiac muscle fibers appeared unremarkable. Figure 2. Normal control. Arrow shows intact integrity of myocardial cell membrane, myofibrillar structure with striations and continuity with adjacent myofibrils. The interstitial space appeared intact.

In Figure 3, some of the cardiac muscle fibers show loss of integrity of the myocardial cell membrane, myofibrillar structure with loss of striations and loss of continuity with adjacent myofibrils (short arrow). The interstitial space at a few areas appeared increased (long arrow). The vascular spaces appeared unremarkable amidst these cardiac muscle fibers. Values are the mean±SEM of six rats/treatment. *P<0.05, **P<0.01, ***P<0.001 as compared to the positive control. LPO = lipid peroxidation.

In the simvastatin 60 mg/kg group (Figure 4), the section studied from the myocardium shows integrity of the myocardial cell membrane, intact myofibrillar structure with striations and continuity with adjacent myofibrils (short arrow), and scattered inflammatory infiltration. In the 70% ethanolic extract of T. stans flowers (250 mg/kg) group (Figure 5), some of the cardiac muscle fibers show loss of integrity of the myocardial cell membrane, myofibrillar structure with loss of striations and loss of continuity with adjacent myofibrils (long arrow). The interstitial space at focal areas appeared increased (short arrow). Whereas in the 70% ethanolic extract of T. stans flowers (500 mg/kg) treated group (Figure 6), intact integrity of the myocardial cell membrane and myofibrillar structure with striations and continuity with adjacent myofibrils (long arrow) was seen; the interstitial space appeared intact, with scattered inflammatory infiltration (short arrow) amidst these cardiac muscle fibers.

Figure 4. Simvastatin (60 mg/kg). Long arrow shows focal loss of arrangement of the cardiac muscle fibers, along with scattered inflammatory infiltration (short arrow) amidst these cardiac muscle fibers. Figure 5. The 70% ethanolic extract of T. stans flowers (250 mg/kg). Long arrow shows loss of integrity of myocardial cell membrane, myofibrillar structure with loss of striations and loss of continuity with adjacent myofibrils. The interstitial space at focal areas appeared increased (short arrow). Figure 6. The 70% ethanolic extract of T. stans flowers (500 mg/kg). Long arrow shows intact integrity of myocardial cell membrane, myofibrillar structure with striations and continuity with adjacent myofibrils. Short arrow shows that the interstitial space appears intact amidst these cardiac muscle fibers.

Discussion

ISO produces relative infarction or hypoxia due to myocardial hyperactivity and coronary hypotension, and induces myocardial ischemia due to cytosolic Ca2+ overload. ISO, a synthetic β-adrenergic agonist, by its positive inotropic and chronotropic actions, increases the myocardial oxygen demand, which leads to ischemic necrosis of the myocardium in rats similar to that seen in human myocardial infarction. A number of pathophysiologic mechanisms have been outlined to explain the ISO-induced myocardial damage, viz.
altered membrane permeability, increased turnover of norepinephrine and generation of cytotoxic free radicals. In addition, ISO administration reduces blood pressure, which triggers reflex tachycardia, thereby increasing myocardial oxygen demand [16,17]. The myocardium possesses a high abundance of enzymes like CK, LDH, ALT and AST. These enzymes serve as a sensitive index to assess the severity of myocardial infarction [18,19]. ISO induces hyperlipidemia and myocardial infarction characterized by increased TG, TC, LDL, very low-density lipoproteins and lipid peroxides such as malondialdehyde, generated by the auto-oxidation of ISO to a semiquinone which reacts with oxygen to produce superoxide anions and H2O2. Subsequently, endogenous antioxidants such as SOD and GSH are also depleted [20,21].

The prior administration of the extracts showed a significant reduction in the elevated serum marker enzymes of myocardial infarction. This reduction in the enzyme levels confirms that the plant extract is responsible for protection of the normal structural and architectural integrity of the cardiac myocytes. In this study, ISO-treated rats showed significant elevation in the levels of serum markers (ALT, AST, LDH, CK, TG, TC, LDL and serum glucose) and a decrease in HDL. There was a marked depletion of GSH, SOD and CAT levels in the ISO-treated group. Treatment with 70% ethanolic extract of T. stans flowers prevented the decline of GSH, SOD and CAT levels in a dose-dependent manner. There was also dose-dependent inhibition of in vivo lipid peroxidation by both doses (250 mg/kg and 500 mg/kg) of the 70% alcoholic extract. Plant extract administration showed a protective effect against the ISO-induced alterations in the biochemical parameters and eliminated the acute fatal complications by protecting against cell membrane damage. In our investigation, it was observed that the flowers possess polyphenolic compounds (flavonoids and tannins), and these constituents are reported to have antioxidant and organ-protective properties. The data of the present study clearly showed that the plant extract modulated most of the biochemical and histopathological parameters to near-normal status in ISO-treated rats, supporting its cardioprotective role.

In the GC-MS analysis, compounds were identified from the hydroalcoholic fraction which are already used in the food, cosmetic and pharmaceutical industries, and some of them are reported to possess antioxidant properties, viz. 2-methyl-1-pentene, which reacts strongly with oxidizers [22]; N-methyl-2-pyrrolidine, which is used as an antioxidant in cosmetics and veterinary preparations (LD50 7 g/kg); propanedioic acid, which is a precursor for the synthesis of vitamins B1, B6 and B12 and is also used to prevent free-radical mediated resorption of bones in broiler chicks (LD50 4 g/kg); 3-ethoxy-2-butanone (banana oil), which is used as a rust preventer, anti-freezing agent and antioxidant (LD50 16.6 g/kg) [23]; etc. The results of the present study propose that the test extract successfully quenched the oxidative stress induced by ISO. Further, some of the fragments found in the GC-MS analysis are also reported to possess antioxidant properties and contributed to the cardioprotection. However, further investigations are required to elucidate its exact mechanism of action and to establish a possible basis for its clinical utility.

Conflict of interest statement

We declare that we have no conflict of interest.

Comments

Background

Cardiovascular diseases (CVD) remain the principal cause of death in both developed and developing countries, accounting for roughly 20% of all annual deaths worldwide.
Myocardial infarction is the rapid development of myocardial necrosis caused by a critical imbalance between the oxygen supply and demand of the myocardium.

Research frontiers

The present study attempts to characterize the 70% ethanolic extract of T. stans flowers by GC-MS analysis and to investigate the cardioprotective effect of the 70% ethanolic extract of T. stans flowers against ISO-induced myocardial infarction in rats by means of biochemical and histopathological studies.

Related reports

It has been reported that ISO causes severe stress in the myocardium resulting in infarct-like necrosis. It is also well known that ISO generates free radicals leading to lipid peroxidation, which cause irreversible damage to the myocardium.

Innovations & breakthroughs

The data of the study showed that the flower extract modulated most of the biochemical and histopathological parameters to near-normal status in rats with ISO-induced myocardial infarction, supporting its cardioprotective role.

Applications

It may be significant to know that T. stans flowers have been scientifically studied in order to use the flowers as a drug in the management of ischemic heart diseases.

Peer review

The present study made an attempt to investigate the protective effect of the 70% ethanolic extract of T. stans flowers against experimentally induced myocardial infarction in rats. The aim of the study is interesting, and the language of the results is quite appropriate and understandable.
2019-03-07T14:19:54.107Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "2eae3eae5f9772306e7c1e5c61f3b0ddfdf5f10a", "oa_license": null, "oa_url": "https://doi.org/10.1016/s2222-1808(14)60474-6", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f24f85b4a4d1141d32d829dfd861f8acb4e5e0e5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
41523910
pes2o/s2orc
v3-fos-license
LGBT Africa: A social justice movement emerges in the era of HIV

LGBT communities are emerging across Africa in 2012. Many are emerging in the context of the continent's severe HIV epidemic. Homophobia is a barrier to social acceptance and to health and other social services, but African communities are showing resilience in addressing stigma and discrimination, and in organizing for rights and social tolerance.

Introduction

Sexual and gender minorities, which generally include lesbian, gay, bisexual, and transgender (LGBT) persons, are found in every human society, culture, and context. The treatment of these individuals, and of their partners and families, varies enormously across social and political landscapes. Since the beginnings of the modern movement for sexual and gender minority rights, which many historians date to the June 1969 Stonewall uprising in New York City, LGBT rights movements have been enormously challenging for virtually all societies to address, tolerate, and accept. Current LGBT rights movements are very much works in progress in much of the world, with intense counter-movements in many settings responding to emerging communities and their demands with increased repression, efforts to limit rights, and both legal and extra-legal forms of discrimination and persecution. Africa, with its vast political, social, economic, cultural, and religious diversity, is no exception. The social tolerance of sexual and gender minorities ranges from full citizenship rights, including civil marriage rights, in South Africa to criminalization and the death penalty for same-sex relations between consenting adults in Sudan (Itaborahy 2012).

While LGBT rights issues include many social concerns in their own right, they have also been prominent areas of effort and contention in the era of HIV. This is true everywhere the pandemic has affected communities and countries, but perhaps most especially so in Africa, the region of the world hardest hit by HIV/AIDS. Gay, bisexual, and other men who have sex with men (MSM) have been disproportionately affected by HIV, with high rates of infection and loss of life, since the beginning of AIDS in 1981 (Baral, Sifakis, Cleghorn & Beyrer 2007). Transgender persons, particularly male-to-female (M-to-F) persons, have also suffered from high burdens of HIV disease (Herbst, Jacobs, Finlayson, McKleroy, Neumann, Crepaz, et al. 2008). And in Africa, lesbian women in some settings have been targets of sexual violence, which has led to considerable HIV risks for these women as well (Gontek 2007).

HIV program efforts (and resource streams) are much larger in scale and in scope than LGBT rights programs, and so the HIV agenda has often dominated the LGBT discourse and been a primary focus of many organizations working with sexual minorities. This has been empowering for some communities, but further stigmatizing in others. And for most of Africa, this has also meant that LGBT communities have emerged, or are emerging, in the context of Africa's HIV epidemics, with all of the social stigmas and fears that have come with HIV, but also in the context of the community mobilization and HIV/AIDS activism and social engagement. Indeed, in several countries, the first real recognition that MSM populations have been present at all has been through HIV prevalence studies among these men (Baral, Trapence, Motimedi, Umar, Iipinge, Dausab, et al. 2009).
The challenges of providing effective HIV prevention and treatment programs for sexual minorities have also been at the forefront of government-sector challenges in recognizing that these men exist and need services. This has led to blame and to greater stigma in some settings, but to greater visibility virtually everywhere. This visibility is unlikely to diminish. And whether African governments or wider communities are ready, willing, or able to address LGBT rights and demands today, there is no question that gay and transgender Africans are emerging and calling for their rights, and that a continental civil rights struggle is now underway.

The human rights framework and LGBT populations

The Universal Declaration of Human Rights of 1948 is a universalist document. It is based on the principle that the rights articulated are fundamental to all human beings and that these are derived from shared human dignity. While sexual and gender minority rights are not included in the specific language of the text, the implication of universality was made clear by the drafters. The first article asserts that 'All human beings are born free and equal in dignity and rights' (United Nations 1948). These are rights that all humans share. But of course in 1948, most of Africa was still under European colonial rule, and few of the rulers accorded colonial subjects the same rights as Europeans. And even fewer countries accorded women the same rights as men. The USA, which sent former First Lady Eleanor Roosevelt to the drafting committee, also sent the great Black scholar W.E.B. Du Bois. Du Bois insisted on non-discrimination on the basis of race and ethnicity as one of the fundamental human rights. This was accepted and is part of the Declaration, yet the USA in 1948 was at the height of 'Jim Crow' segregation laws and actively discriminated against Black Americans in education, employment, housing, health care, and virtually every other aspect of social life. Such stark contradictions were part of the reality of the post-war period. Nevertheless, the Universal Declaration was then, and is now, an aspirational document. It lays out a vision of a better and more equal future for humanity and so lays out the goal of equal protection under the law for all persons. When it asserted equal rights for women, most women in the world had limited rights, and so the struggle for gender equality has become a struggle to realize these rights for women and girls. This is certainly analogous to the rights of sexual and gender minorities, who remain excluded and legally sanctioned in many settings and for whom equal protection under the law remains a legal reality in only one African state, South Africa.

The clearest articulation of human rights for LGBT persons is the Yogyakarta Principles (YP) (2006). The YP were developed by an international panel of human rights experts who met in the Indonesian city of Yogyakarta. They reviewed the existing international human rights conventions and treaties to identify and clarify the obligations of states to respect, protect, and fulfill the human rights of sexual and gender minorities. An impressive number of rights principles turn out to have implications in Yogyakarta. And at least two countries, India and Nepal, which have removed discriminatory laws against LGBT persons and decriminalized homosexuality, have used the YP in doing so (Beyrer, Sullivan, Sanchez, Dowdy, Altman, Trapence, et al. 2012).
In terms of HIV/AIDS, and health more broadly, the YP clearly indicate that there is no precedent or justification in human rights law to discriminate in health care services on the basis of sexual orientation or gender identity. But the hard truth is that discrimination and outright exclusion from health care services do exist and continue to be real barriers to services in Africa (Fay, Baral, Trapence, Motimedi, Umar, Iipinge, et al. 2012; Poteat, Diouf, Drame, Ndaw, Traore, Dhaliwal, et al. 2011). The roots of homophobia Given the relatively small number of LGBT persons in any population and the relatively modest impacts that changes in rights have had on majority populations when LGBT rights have been respected, it is difficult to understand why the counter-movement to LGBT equality has been so intense, so emotionally held, and so vitriolic. While many opponents of equality for LGBT persons have invoked religious sanctions as the core objection to rights, there has by no means been a consistent reading of such texts, and many of the 'abominations' cited in Old Testament law are considered by many adherents of faiths which use the texts to be of little or no significance in modern life (that eating shellfish is an abomination is just one of many examples). A deeper reality may be the perceived threat to gender norms and to the status of men versus women, the male versus the female, in many societies. While modern human rights movements worldwide, including nearly all of the movements for HIV treatment access, non-discrimination, and the like, have strong bases in gender equality, the reality on the ground for women and girls worldwide is that men still dominate women in many spheres of political, economic, and social life. Many African societies are struggling with traditional norms of valuing maleness over femaleness, while attempting to address gender inequality, improve educational access for girls, and empower women in economic and political life. And many modern economic trends can empower women over men, in ways that may present real challenges to male norms of control. Traditional male roles in pastoral societies based on livestock as wealth, for example, may be undermined by migration to urban areas, where women may find work more easily than migrant men. The newly educated young may have more opportunities than their elders, reversing power relations by age and further undermining family structures. These dynamics can cause personal, familial, and social tensions as men and women struggle to negotiate new roles and relationships. Men who willingly take on aspects of the feminine, of whom transgender persons may be the clearest example, may be seen as further undermining male power. Homophobia, and the violence directed against LGBT persons, may thus be understood as a form of misogyny, or perhaps a manifestation of it. The terrible crime of so-called 'corrective rape', in which women perceived to be lesbian, transgender, or otherwise insufficiently female are sexually assaulted by men, may be among the most extreme examples of this kind of violent response to the threat of male supremacy (Gontek 2007). And recent evidence suggests that male perpetrators of homophobia, and of homophobic bullying and violence, may themselves be more likely to have strongly negative reactions to their own same-sex desires (Adams, Wright & Lohr 1996). 
Fear, misunderstanding, and prejudice all likely play roles in the emergence of anti-gay rhetoric and acts. And the deep roots of homophobia may indeed lie in the tensions of changing gender norms or the residual misogyny of many societies. But we must also acknowledge that these emotions have been manipulated by political and other leaders, often for cynical gain. The late President of Malawi, Bingu wa Mutharika, repeatedly engaged in anti-homosexual rhetoric when his administration was being accused of corruption and mismanagement (Kasunda 2012). And in the case of Uganda, where extremely discriminatory legislation has been under debate for several years, there is evidence of evangelical Christian leaders from the USA playing active roles in promoting anti-gay legislators and their positions (Gettleman 2010). The use of anti-LGBT rhetoric as a political wedge issue has an unfortunately long and disturbing history in the USA and was a political tool used repeatedly by Republican strategist Karl Rove to assist in elections (Rutenberg 2010). As political systems mature, and voters gain in education and in tolerance, these attempts to manipulate hatred and fear of vulnerable sexual minorities will hopefully become a part of the political past. But for now they remain galvanizing tools in too many political systems and debates. And, as a consequence, these controversies have sometimes added to the exclusion of LGBT persons and MSM from HIV services, as has happened recently in Senegal (Poteat, Diouf, Drame, Ndaw, Traore, Dhaliwal, et al. 2011). Ways forward Despite harsh family, social, and political sanction, LGBT persons, groups, and communities are emerging across Africa. They are calling for justice, equality, and access to health care in settings that are safe, protect confidentiality, and respect the dignity of their persons. This is part of a wider global movement for sexual and gender minority rights that has now reached across continents, development levels, and political systems. The history of other movements toward equality, for women, for ethnic and religious minorities, and for the disabled, shows us that this will be a long struggle, with many setbacks, and sadly, much more unnecessary human suffering. But the proud history of resistance and justice also shows us that expanding human rights, not increasing intolerance, will likely prevail. Partnerships with other rights and social justice movements may be key to making headway for LGBT Africans. The HIV community is an obvious ally, though many gay men and other MSM are rightly cautious about being defined solely as an at-risk group for HIV infection. Both the Global Fund to Fight AIDS, TB and Malaria and the US PEPFAR program have included strong support for sexual and gender minority rights, principles of non-discrimination in care, and guidance on best practices for provision of care to MSM and other sexual minorities at risk of HIV infection. Women's organizations and those advocating for gender equality and women's empowerment may also be key allies for LGBT movements. And those voices of social tolerance and religious inclusion that are already speaking out against exclusion will likely be key partners in changing community views toward acceptance.
2018-04-03T05:06:42.473Z
2012-09-01T00:00:00.000
{ "year": 2012, "sha1": "5ba02b5409fc8ff56b98ae4a360d44563ab036d5", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1080/17290376.2012.743813", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "08e6db096e289d85238cecaca7253ebf55b9aba3", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Sociology", "Medicine" ] }
15101422
pes2o/s2orc
v3-fos-license
Theory of a Scanning Tunneling Microscope with a Two-Protrusion Tip We consider a scanning tunneling microscope (STM) such that tunneling occurs through two atomically sharp protrusions on its tip. When the two protrusions are separated by at least several atomic spacings, the differential conductance of this STM depends on the electronic transport in the sample between the protrusions. Furthermore two-protrusion tips commonly occur during STM tip preparation. We explore possible applications to probing dynamical impurity potentials on a metallic surface and local transport in an anisotropic superconductor. Scanning tunneling microscopy (STM) enables the characterization of materials on the atomic scale through measurements of the local density of states (LDOS). Recently, in a series of STM experiments on the Cu (111) surface, the local transport properties of electrons in a Shockley surface state were probed through their influence on the LDOS around an Fe impurity [1]. A similar experiment has been proposed for measuring the transport properties of a high-temperature superconductor [2]. These experiments detect the reflection from the impurity of electrons injected by the STM. The spatial resolution of these experiments is sub-Angstrom. Properties which might be determined from these types of measurements, but could not be probed by an STM measurement on the homogeneous sample, include the angularly-resolved dispersion relations and mean free path, as well as the density of states as a function of energy and momentum. In the Cu experiments the dispersion relation was measured, no evidence for density-of-states anisotropy with angle was found, and the mean free path was too large to detect. A stronger signal would result from an independent method of injecting electrons at a site (by another contact) and detecting them elsewhere with an STM. The possibilities for this have been explored in recent work [3,4] and will be referred to as the two-contact experiment. A related technique, applicable to transport on a longer length scale (µm), uses a laser to create nonequilibrium quasiparticles and a point contact to detect them [5]. The STM is also sensitive to the spectroscopic properties of impurities [6,7]. However an STM averages over the fluctuating part of an impurity potential, such as that of a free orbital moment, making it difficult to observe. Such free moments are of great interest in part because their interaction with conduction electrons may produce a Kondo resonance. We suggest in this Letter an experiment which should provide detailed angular information about fluctuating impurity potentials and probe transport on a homogeneous sample. The apparatus, shown schematically in Figure 1, would consist of a spatially-extended STM tip with two protrusions each ending in a single atom. Although the case of STM tips with tunneling at more than one place has been considered before [8], that work was concerned with tips ending in clusters with substantial tunneling through more than one atom. The images obtained from these STM tips tended to be blurry and less useful than those from single-atom-terminated contacts. Here we propose a tip with two atomically-sharp protrusions, and will demonstrate that new information is obtainable when these protrusions are separated by more than 10Å. 
In contrast to the difficulties associated with arranging two independent contacts in close proximity, two-protrusion tips are often created by chance during tip preparation, with tip separations up to 1000Å [9]. The interference between the two protrusions influences tunneling conductances at a lower order in the tunneling matrix elements than the two-contact experiment. The two-protrusion experiment, therefore, should be easier to construct and have greater signal than the two-contact experiment. For separations of 10Å−100Å a two-protrusion tip would be useful for probing the angular structure of a free moment. The differential conductivity depends on the angle-resolved amplitude for electrons to scatter from the impurity. A measurement with a single-protrusion STM tip merely measures backscattering. For an impurity state fixed relative to the lattice orientation by the crystal field, backscattering is sufficient to determine the impurity's angular structure; therefore the single-protrusion measurement provides as much information as the two-protrusion one. However, for identifying the angular structure of a free moment, the two-protrusion tip is superior to the single-protrusion tip. A two-contact experiment could in principle measure this angular structure as well [4], but positioning two tips within 100Å of each other would be extremely difficult. On a homogeneous sample the transport quantities of interest would determine the desired separation of protrusions on the STM tip. Measurements of quantities with long length scales (100Å−1000Å) such as mean free paths, transitions from ballistic to diffusive propagation, low-T_c superconductors' coherence lengths, charge-density-wave correlation lengths, and angularly anisotropic density-of-states effects [2] would most benefit from the increased signal of the two-protrusion configuration relative to the two-contact configuration. It is also at these distances that the overlapping interference of other impurities on a surface would complicate a measurement performed with a single-protrusion STM around an impurity. However, for electronic quantities with short length scales, such as Fermi wavelengths, the single-protrusion STM would likely perform the best of the three. The tunneling Hamiltonian is of the standard transfer form, H_T = ∫ dx dr [T(x, r) ψ†(x) φ(r) + h.c.], where ψ(x) is the field annihilation operator for an electron at position x in the STM tip, and φ(r) is the field annihilation operator for an electron at position r in the sample. The electron spin is treated implicitly to simplify the notation. The differential conductance of the STM at T = 0 K can be written in terms of T(x, r), G, the Green function in the sample, and g, the Green function in the STM tip. Typically the transfer function T(x, r) is taken to be localized near a point in the sample and the tip, T(x, r) = W υ(x − x_o) υ′(r − r_o), where the integrals of |υ|² and |υ′|² are unity. With υ′ a highly-localized function the differential conductance is proportional to Im G(r_o, r_o; eV). When υ is also very localized the proportionality constant is e²|W|²N(0)/h, where N(0) is the density of states in the tip at the Fermi energy. The expression dI/dV = e²|W|²N(0) Im G(r_o, r_o; eV)/h is a common starting point for STM theory [10]. We model the effect of a single impurity resonance by a Hamiltonian term of the form E_I χ†χ + ∫ dr [A(r) χ†φ(r) + h.c.], where χ is the annihilation operator for an electron in the localized state. For a simple model [11], A(r) ∝ Ψ(r), the (normalized) wavefunction of the impurity state. 
The Green function then acquires a resonant correction proportional to ∫ dr dr′ G(r_1, r; eV) Ψ(r)Ψ*(r′) G(r′, r_2; eV)/(πN_s), where E_I and Γ are the energy and linewidth of the impurity state and N_s is the density of states of the sample at E_I. We assume there is no other influence on Γ besides hopping to the extended states. For a d-state, Ψ(r) = (2/(ξ√π)) e^(−r/ξ) cos(2θ_r), where ξ is the range of the state. For a fixed d-state, θ_r is measured relative to a crystallographic axis. Figure 2 shows the single-protrusion differential conductance in the vicinity of a fixed impurity d-state with ξ = 2k_V^(−1) (k_V is the wavenumber of the electronic state with energy eV). It shows clear four-fold symmetry. If the crystal field splitting of the impurity levels is larger than the temperature and Γ, the STM can separately probe each non-degenerate level by adjusting the voltage. The two-protrusion experiment becomes more useful than the single-protrusion experiment when the impurity of interest has a free moment. Then the dI/dV must be averaged over orientations of θ_r in Eq. (6) (and Fig. 2). The two-protrusion transfer function, however, contains contributions from both tunneling sites below the protrusions, T(x, r) = W Σ_i υ(x − x_i) υ′(r − r_i), where for simplicity we consider υ, υ′, and W to be independent of i. The STM current for this system then involves the double sum Σ_{i,j} Im g(x_i, x_j; 0) Im G(r_j, r_i; eV) (Eq. (7)). The i = j terms describe direct tunneling through the two protrusions, but the i ≠ j terms are interference terms between the two protrusions. We approximate Im g(x, x′; 0) by N(0) exp(−|x − x′|/ℓ_I), where ℓ_I is the inelastic mean free path [12]. A favorable tip material would have a large N(0) and a long ℓ_I. Figure 3 shows two-protrusion measurements of a d-orbital which is free to move in the plane of the surface. The dI/dV is plotted as a function of distance from the impurity r (same for both protrusions) and angle θ between the two tips. A single-protrusion measurement corresponds to θ = 0. The four-fold structure of the d-orbital is clearly visible in Figure 3, which should be thought of as a compilation of results which must come from many different tips, since for each tip the distance between protrusions is fixed. A measurement with a particular tip, with separation 7.5 k_V^(−1), is shown in Figure 4. The differential conductance through the two protrusions at r_1 = (x, 3.75) and r_2 = (x, −3.75) is compared to the differential conductance of a single protrusion located at r_1 with tunneling matrix element 2W. The geometry is shown in Figure 1. The differential conductances for x = 0 (θ = π) are identical, and at large |x| (where θ is small) the two are very similar. The most prominent difference is the absence in the two-protrusion dI/dV of a peak near x = ±3.75. Since θ = π/2 for x = 3.75, the direct i = j terms are almost fully cancelled by the i ≠ j terms in Eq. (7). We emphasize there is no need to rotate the tip assembly to perform this measurement. If the impurity moment is free, for any tip orientation the geometry of Figure 1 can be arranged solely by translation of the tip. The orientation and protrusion separation of the tip can be identified by analyzing the double image from an impurity or a step edge. We now apply Eq. (7) to transport between the two protrusions through a d_{x²−y²}-gapped superconductor. A d_{x²−y²} gap has been proposed [13] for high-T_c superconductors, including a superconductor with a cylindrical Fermi surface [2]. 
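Before moving to the superconducting application, the advantage of the cross terms for a free moment can be checked with a toy average. The short Python sketch below is illustrative only; it keeps just the angular factor cos 2θ of the d-state and ignores the radial and Green-function structure of Eq. (7). Averaged over random orientations θ0 of the moment, the single-site backscattering factor is a featureless 1/2, while the two-site cross term retains a 0.5 cos 2(θ1 − θ2) dependence on the relative angle between the protrusions, which is the heuristic reason the relative-angle structure of Figure 3 survives the orientation average.

import numpy as np

rng = np.random.default_rng(0)
theta0 = rng.uniform(0.0, 2.0 * np.pi, 200_000)    # random in-plane orientations of the free moment

def d_factor(theta, theta0):
    # angular factor of the d-state wavefunction seen from direction theta
    return np.cos(2.0 * (theta - theta0))

theta1 = 0.3                                        # direction of protrusion 1 from the impurity
for dtheta in (0.0, np.pi / 4, np.pi / 2):
    theta2 = theta1 + dtheta                        # direction of protrusion 2
    single = np.mean(d_factor(theta1, theta0) ** 2)                        # i = j (backscattering) term
    cross = np.mean(d_factor(theta1, theta0) * d_factor(theta2, theta0))   # i != j (interference) term
    print(f"relative angle {dtheta:.3f} rad: single = {single:.3f}, "
          f"cross = {cross:.3f}, 0.5*cos(2*dtheta) = {0.5 * np.cos(2.0 * dtheta):.3f}")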
Figure 5 shows the position-dependent differential conductance as a function of x and y for the d_{x²−y²} gap Δ_k = Δ_max cos(2φ_k). φ_k is the angle that the momentum k makes with the crystallographic a-axis. The voltage bias is set well below the gap maximum (eV = 0.1Δ_max) so that the quasiparticles are only able to propagate in the directions where Δ_k has nodes. In a heuristic sense, gap anisotropy produces an angularly-dependent density of states, which can be qualitatively different at different energies. For a d_{x²−y²} gap, at voltages much less than the gap maximum, quasiparticles can only travel in the real-space directions roughly parallel to node momenta, yielding "channels" of conductance [3]. At voltages slightly higher than the gap maximum there are more states for momenta near the gap maximum, so the channels would appear rotated by 45°. Measurements of gap anisotropy, particularly from angle-resolved photoemission [15], are of great current interest for distinguishing among various theories of high-temperature superconductivity. Tunneling experiments have an energy resolution better than a meV, far superior to angle-resolved photoemission. Again it is not necessary to rotate the tip assembly, since regions of the sample with different orientations (separated by grain boundaries) could be measured instead. The signal in this two-protrusion transport experiment is greater than in the two-contact or impurity configurations because the interference terms in Eq. (7) are first-order (proportional to |W|²). The two-contact experiment relies on a second-order process, proportional to |W|⁴ [3]. The impurity transport experiment [2] relies on a process which is first-order in tunneling, |W|², and in impurity scattering, |U|² (where U is the potential strength), and thus overall is second order (|W|²|U|²). The primary goal of this Letter has been to offer an example of how a two-protrusion STM can explore the characteristics of fluctuating impurity potentials and the local transport properties of a homogeneous sample. A two-protrusion dI/dV indicates the angular symmetry of an impurity state in much the same way that a differential cross-section characterizes a scattering potential.
[Caption of FIG. 3: Two-protrusion STM differential conductance for an impurity d-state in the same units as Fig. 2. The two tips are assumed to be the same distance from the impurity. The differential conductance is plotted as a function of tip-impurity distance r (units of k_V^(−1)) and relative angle θ. The two-protrusion STM differential conductance shows clear four-fold angular symmetry. The single-protrusion STM signal corresponds to θ = 0.]
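The nodal "channels" can be given a rough number. Under the simplifying assumptions of a cylindrical Fermi surface and a BCS-like quasiparticle energy E_k = sqrt(ξ_k² + Δ_k²), assumptions made here only for illustration and not taken from the Letter, the fraction of Fermi-surface directions carrying quasiparticles at bias eV is (2/π) arcsin(eV/Δ_max) for the d_{x²−y²} gap. The Python sketch below checks this numerically and shows that at eV = 0.1Δ_max only about 6% of the directions, in four narrow wedges around the nodes, are open.

import numpy as np

delta_max = 1.0                                                 # gap maximum (arbitrary units)
phi = np.linspace(0.0, 2.0 * np.pi, 400_000, endpoint=False)    # Fermi-surface angle
gap = delta_max * np.cos(2.0 * phi)                             # d_{x^2-y^2} gap, nodes at phi = pi/4 + n*pi/2

for bias in (0.1, 0.5, 1.1):                                    # eV in units of delta_max
    open_frac = np.mean(np.abs(gap) <= bias * delta_max)        # directions with |Delta_k| <= eV on the Fermi surface
    analytic = min(1.0, (2.0 / np.pi) * np.arcsin(min(bias, 1.0)))
    print(f"eV = {bias:.1f}*delta_max: open fraction = {open_frac:.3f} (analytic {analytic:.3f})")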
2014-10-01T00:00:00.000Z
1995-09-01T00:00:00.000
{ "year": 1995, "sha1": "26e5937038fb544406f0e02566e463c823f0cb77", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/9509001", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "26e5937038fb544406f0e02566e463c823f0cb77", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics", "Medicine" ] }
258168856
pes2o/s2orc
v3-fos-license
Operando Monitoring of Oxygen Drop in Catalyst Layers Accompanied by a Water Increase in Running Proton Exchange Membrane Fuel Cells Accurate understanding of internal phenomena and their feedback is intrinsically important for improving the performance and durability of proton exchange membrane fuel cells. The oxygen partial pressure (p(O2)) at 10 μm from the cathode catalyst layer (CL) in the gas diffusion layer was measured by using an optical fiber with an oxygen-sensitive dye applied to the apex, when the current density was abruptly increased. p(O2) decreased with increasing current density at constant air utilization. This decrease in oxygen partial pressure is attributed to the increased amount of water at the cathode CL due to the oxygen reduction reaction and electro-osmotic drag, as previously proposed. A shortage of oxygen in the CL for 1 s was also detected by using this p(O2) monitoring. These results are consistent with the previous results obtained by operando time-resolved measurements of the water distribution in the electrolyte membranes of a running fuel cell. INTRODUCTION Proton exchange membrane fuel cells (PEMFCs) are considered promising energy conversion devices due to their cleanliness and high efficiency. The operation of PEMFCs involves a number of chemical and physical phenomena, including electrochemical reactions and the transfer of heat, mass, and electron/ion charge; an accurate understanding of their internal reaction distribution is essential for improving the performance and durability of PEMFCs. It has been shown that water management is critical to maximizing the performance and durability of PEMFCs. 1,2 To maintain good proton conductivity and performance, the inlet gas should be humidified to keep the membrane hydrated. On the other hand, excess liquid water can flood the catalyst layer (CL) and gas diffusion layer (GDL), as well as the gas flow channels, leading to a high mass transport resistance. To optimize the mode of operation, the behavior of the water or gas inside the fuel cell under operating conditions needs to be clarified. Hence, operando measurements have been carried out for liquid water in the membrane electrode assembly (MEA) and the gas flow channels of fuel cells by direct optical systems, 3−5 X-ray imaging, 6,7 neutron imaging, 8−10 and vibrational spectroscopy. 11−16 Recently, the oxygen partial pressure (p(O2)) has been measured during power generation using an oxygen-sensitive dye. 17−22 Higher spatial and temporal resolutions are required for these operando measurements to understand the transient behavior in PEMFCs, especially those used for automotive applications. Previously, we reported on the water distribution in a transient state inside a proton exchange membrane (PEM) during a current density jump using time-resolved coherent anti-Stokes Raman scattering spectroscopy. 15,16 When the current density was suddenly increased, the cell voltage recovered in seconds after the initial rapid decrease. We found that the number of water molecules per sulfonic acid group, λ, at the cathode surface of the PEM overshot in synchrony with the change in cell voltage (Figure S1 in the Supporting Information). 15 This sudden increase (overshoot) of water inside the membrane was lowered by the ejection of water to the CL or by the back diffusion of water into the membrane. 
Furthermore, the results of synchronous changes in λ and cell voltage suggested that the overshooting water discharged into the cathode CL temporarily lowered the amount of oxygen in the CL to be used for the power generation; the distributions of water and oxygen from the nanometer to micrometer scale were expected to have an effect on the transient power generation. In this study, we now monitored p(O 2 ) near the CL surface at the cathode during the current density jump. The decrease in p(O 2 ) near the CL was clearly observed to follow the voltage drop. During this p(O 2 ) change, the oxygen concentration in the CL was lowered as discussed in the previous study. 15 Figure 1a shows a schematic representation of the p(O 2 ) monitoring system reported previously. 21, 22 An optical fiber was inserted into the cathode side of the cell, and a 532 nm diode laser light was irradiated onto the oxygen-sensitive dye film at the fiber apex. The excitation light and the emission at 650 nm were separated by a dichroic mirror, and the emission was detected by a CCD camera with a reflective filter for excitation light placed in front of the camera. In the center of the cathode side of the GDL, a pinhole 90 μm in diameter was created down to the CL. 12,[14][15][16]22 An optical probe was inserted directly into a PEMFC through the hole. A single-mode optical fiber with a clad diameter of 125 μm and a core diameter of 10 μm was immersed in HF solution (pH = 2.9) and etched until the clad diameter became 50 μm. Afterward, the apex of the optical fiber was cut off flat perpendicular to the fiber axis. At the apex of the optical fiber, the oxygen-sensitive dye solution was used to form a dye film with a thickness of 2 μm. The probe depth was controlled by a micrometer. For measuring the distance between the probe apex and the CL surface, a super luminescent diode light with a wavelength of 830 nm was introduced to the probe to obtain an interference light from the surfaces of the CL and the probe apex. The interference spectrum of reflection lights was Fourier-transformed to measure the distance with an accuracy of 1 μm. Cell for Oxygen Monitoring. The structure of a cell with nine straight flow channels for p(O 2 ) monitoring is shown in Figure 1b. The widths of the gas flow channels and the ribs and the depth of the gas flow channels were all 1 mm. On the cathode side of the stainless-steel end plate, a window is formed for the insertion of an optical probe. An acrylic insulator with a hole was inserted between the end plate and the current collector to position the probe. The hole for the optical probe was in the center of the active area under the central gas flow channel. A catalyst-coated membrane (CCM) was prepared by spray coating both sides of the Nafion membrane (NRE211, E. I. du Pont de Nemours & Company, Inc.) with catalyst paste consisting of a Pt catalyst supported on carbon black (46.9 wt % Pt, Tanaka Kikinzoku Kogyo, Japan), pure water, ethanol, and 5 wt % Nafion ionomer (ion exchange capacity = 0.9 meq g −1 , DE521, E. I. du Pont de Nemours & Company, Inc.) with an ionomer/carbon volume ratio of 0.7, using a pulse whirl spray system (Nordson). The CCMs were dried in an oven at 60°C for 12 h. The thickness of the CL was approximately 7 μm. The active area was 20 mm × 20 mm. For the MEA, a CCM and GDLs with microporous layers (MPLs) (SIGRACET 29 BC, SGL Carbon Group Co., Ltd.) were sandwiched. The tightening pressure of the cell was 2.4 kN. p(O 2 ) Measurement. 
For the power generation, a test bench (As-510-340PE, NF Circuit Design Block Co.) was used. The cathode and anode gases were supplied as parallel flows. The cell temperature was set at 80°C and the relative humidity (RH) at 80%. An optical probe was inserted and placed 10 μm from the CL. To obtain a calibration curve for p(O2), mixtures of N2 and air at different ratios were supplied to both the anode and cathode at 500 mL min−1 for 10 min. At each gas mixture ratio, the emission was measured five times with a CCD camera with an exposure time of 400 ms and averaged. The emission intensity of the oxygen-sensitive dye degraded linearly with laser irradiation at approximately 0.06% s−1. To obtain p(O2) using this dye, this degradation was compensated after the emission data had been acquired. The calibration curve is shown in Figure 2. Using this calibration curve and the Stern−Volmer equation, p(O2) was determined. 24 We have previously studied the influence of the water vapor pressure on the emission from the dye film and made clear that the water vapor partial pressure had no influence on monitoring p(O2). 17 The Stern−Volmer equation is formulated as in eq 1: I_0/I = 1 + kτ p(O2), where I_0 is the intensity without a quencher, I is the intensity with a quencher, k is the quencher rate coefficient, and τ is the lifetime of the emissive excited state without a quencher present. The cell was operated at 80°C and 80% RH. The power generation at 0.1 A cm−2 was continued for 20 min, and the current density was then jumped (at t = 0 s) to either 0.3, 0.5, or 0.7 A cm−2. The emission from the oxygen-sensitive dye and the cell voltage were recorded every 400 and 50 ms, respectively. The effect of the 90 μm hole created inside the GDL and the insertion of a 50 μm optical fiber on p(O2) at the apex was estimated to be less than 1% by a fluid-flow computation using mixtures of air and water vapor under the operational conditions. 22 During the continuous irradiation of laser light onto the dye film, the emission intensity degraded linearly with laser irradiation at approximately 0.06% s−1, which was easily corrected. In this method, the error in the experiments was thus estimated to be ±2%. 22 Therefore, the fluctuations observed in the following Figure 3b are not experimental errors. The degree of pressure drop within the cell can impact the oxygen concentration. To understand the pressure drops along the gas channels, we previously investigated the pressure drop using a single cell with straight channels by the combination of the visualization of p(O2) and numerical simulations. 23 Under ambient pressure, the effect of the pressure drop along the straight flow channel was negligibly small. To summarize, the experimental parameters are listed in Table 1. RESULTS AND DISCUSSION After the current density was kept at 0.1 A cm−2 for 20 min, it was abruptly jumped to 0.3, 0.5, or 0.7 A cm−2. The moment of the current density jump was defined as t = 0 as in Figure 3. Figure 3a,b shows the time variation of cell voltage and p(O2) from t = −5 to 10 s, respectively. At 0.1 A cm−2, the cell voltage showed a constant value of approximately 0.745 V, which dropped after the current density jump. In our previous study, clear voltage oscillations were observed using a cell without MPLs at the GDLs. 22 In this study with an MPL, the discharge of water from the CL was promoted, and oscillations in cell voltage were not observed. 
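To make the use of eq 1 concrete, the following Python sketch shows how a calibration curve and the Stern−Volmer relation convert a measured emission intensity into p(O2), including a linear correction for the roughly 0.06% s−1 photodegradation of the dye. The calibration numbers and the measured intensity are invented placeholder values, and the simple one-site Stern−Volmer form is assumed; this is an illustration of the procedure described above, not the authors' analysis code.

import numpy as np

# Placeholder calibration: emission intensity (counts) at known O2 partial pressures (kPa)
p_o2_cal = np.array([0.0, 5.0, 10.0, 15.0, 21.0])
i_cal = np.array([1000.0, 780.0, 640.0, 545.0, 470.0])

i0 = i_cal[0]                                           # intensity without quencher (pure N2)
# Stern-Volmer: I0/I = 1 + K_SV * p(O2); K_SV plays the role of k*tau in eq 1
k_sv = np.polyfit(p_o2_cal, i0 / i_cal - 1.0, 1)[0]     # slope of the calibration line

def p_o2_from_intensity(i_meas, t_s, degradation_per_s=6e-4):
    """Invert the Stern-Volmer relation after compensating the slow dye photodegradation."""
    i_corr = i_meas / (1.0 - degradation_per_s * t_s)   # undo ~0.06% per second intensity loss
    return (i0 / i_corr - 1.0) / k_sv

print(f"K_SV = {k_sv:.4f} kPa^-1")
print(f"p(O2) for I = 600 counts after 100 s of irradiation: {p_o2_from_intensity(600.0, 100.0):.1f} kPa")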
From 30 s before the current density jump, the gas flow in the cell was increased according to the current value set after the jump; thus, the oxygen utilization at 0.1 A cm−2 was 13.3, 8.0, and 5.7% for current density jumps to 0.3, 0.5, and 0.7 A cm−2, respectively. After the current density jumps, the utilization of oxygen was set at 40% and that of hydrogen at 70% at all current densities. For all current density jumps, transitions in voltage were observed on the order of 10 s. This is in agreement with numerical simulation results previously reported. 25,26 After the current density jump from 0.1 to 0.3 A cm−2, the voltage immediately dropped from 0.745 to 0.598 V and recovered slowly to a steady value of 0.610 V after 5.0 s (Figure 3). This behavior of the cell voltage upon the current density jump was similar to that reported previously (Figure S1). 15 After the current density jump, p(O2) decreased to 10.3 kPa over 1 s and then slowly kept decreasing to 10.0 kPa. The decrease in p(O2) at 1 s after the current density jump is considered to be due to the increased oxygen utilization as well as the water that accumulated at the cathode CL and was discharged into the gas flow channel. 27 Therefore, interestingly, for seconds after the current density jump, the cell voltage increased whereas p(O2) decreased. This slow decrease in p(O2) near the CL surface may be due to an increase in gas diffusivity at the CL/gas interface caused by the gradual decrease of water after the overshoot of the water content in the membrane. The influence of the gas diffusivity of the interface on p(O2) near the CL was previously reported (Figure S2b). 22 The voltage increased accordingly. Because of flooding, the highest current density obtained was 0.7 A cm−2. After the current density jump from 0.1 to 0.7 A cm−2, the voltage immediately dropped to 0.390 V and slowly kept decreasing to 0.361 V, probably due to continuing partial flooding. After the current density jump, p(O2) decreased to 9.3 kPa over 1 s and then slowly increased to 9.7 kPa, in contrast to the decrease in cell voltage. The amount of water in the CL was expected to slowly increase, accompanied by a decrease in the gas diffusivity of the CL/gas interface, corresponding to the increase in p(O2) at 10 μm from the CL (Figure S2a). 22 The decrease in the cell voltage is also explained by the decrease in the gas diffusivity in the CL. In the jump to 0.5 A cm−2, the cell voltage and p(O2) behaved in a manner intermediate between the jumps to 0.3 and to 0.7 A cm−2. Namely, the cell voltage sharply decreased to 0.489 V, and the value was almost constant for seconds. p(O2) decreased to 9.9 kPa over 1 s and then remained almost constant. Upon the jump to 0.5 A cm−2, the amount of water in the CL was expected to increase and then remain nearly constant for seconds. After the current density jump, it took approximately 1 s for p(O2) to settle at its lower level. Upon the current density jump, the consumption of oxygen at the CL simultaneously increased. On the other hand, the diffusion of oxygen from the gas flow channel to the CL was not instantaneous, taking about 1 s to stabilize, possibly because of the change in water content in the CL. The values of p(O2) decreased by 1.6, 2.1, and 2.3 kPa when the current density jumped from 0.1 to 0.3, 0.5, and 0.7 A cm−2, respectively, at the same oxygen utilization of 40%. 
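The pre-jump utilization values quoted above follow from fixing the air feed at its post-jump rate 30 s in advance. A short check (Python; the 20 mm × 20 mm active area is taken from the experimental section, and the flows follow from Faraday's law with four electrons per O2 molecule) reproduces the 13.3, 8.0, and 5.7% figures:

F = 96485.0                       # Faraday constant, C mol^-1
area_cm2 = 2.0 * 2.0              # 20 mm x 20 mm active area
u_after = 0.40                    # O2 utilization set after the jump

def o2_rate(j_a_cm2):
    # O2 consumption (mol s^-1) at current density j: I / (4F)
    return j_a_cm2 * area_cm2 / (4.0 * F)

for j_after in (0.3, 0.5, 0.7):
    o2_feed = o2_rate(j_after) / u_after          # O2 feed fixed by the post-jump condition
    u_before = o2_rate(0.1) / o2_feed             # utilization while still at 0.1 A cm^-2
    print(f"jump to {j_after:.1f} A cm^-2: pre-jump O2 utilization = {100.0 * u_before:.1f}%, "
          f"post-jump O2 consumption = {o2_rate(j_after) * 1e6:.2f} umol s^-1")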
Therefore, the different p(O 2 ) decreases at different current density jumps can be considered to originate from the different amounts of water produced by the ORR and those transported from the anode by the electro-osmotic drag. Previously, upon current density jumps, the number of water molecules per sulfonated group, λ, at the cathode surface of the PEM was found to overshoot in synchrony with the cell voltage, which led to the consideration that the increased water at the cathode temporarily shortened oxygen in the cathode CL. 15 The temporary shortage of oxygen in the CL by the increase of water was clearly reflected by the lowering of p(O 2 ) taking 1 s after the current density jump (Figure 3). Interestingly, for the current density jump to 0.3 A cm −2 , the cell voltage recovered after the initial voltage drop and p(O 2 ) decreased, whereas, for the current density jump to 0.7 A cm −2 , the cell voltage decreased progressively after the initial voltage drop and p(O 2 ) increased. The decrease in cell voltage after the current density jump to 0.7 A cm −2 can be explained by the lowered diffusivity of oxygen in the CL due to the water produced by the ORR. These results were obtained only at a single point in the center of an MEA of 20 mm × 20 mm at the distance of 10 μm from the CL and do not imply a water behavior over the entire cell. Further experiments at various locations and under various conditions are needed to fulfill the reaction distributions throughout the cell for higher stability and durability. CONCLUSIONS p(O 2 ) at the location 10 μm from the CL at the current density jump was measured using an optical fiber and an oxygensensitive dye during power generation of polymer electrolyte fuel cells at 80°C and 80% RH. p(O 2 ) decreased with the current density jump. A temporary shortage of oxygen in the CL was observed by the slower change of p(O 2 ) in 1 s at the location 10 μm from the CL. During flooding, p(O 2 ) at 10 μm from the CL increased, contrary to the decrease in cell voltage, probably because the diffusion of oxygen was inhibited by increasing water in the CL. 22 Transient behavior in a PEMFC is reflected by the transient distributions of physical/chemical parameters inside. Further operando measurements coupled with numerical simulations are required.
2023-04-16T15:10:42.331Z
2023-04-14T00:00:00.000
{ "year": 2023, "sha1": "1446aeadcdb574429956062be890e91244ead649", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1021/acsomega.3c00461", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e5e57fceea0ceaa6d692f0a2c469975316e67a58", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
119167430
pes2o/s2orc
v3-fos-license
Points on singular Frobenius nonclassical curves In 1990, Hefez and Voloch proved that the number of $F_q$-rational points on a nonsingular plane $q$-Frobenius nonclassical curve of degree $d$ is $N = d(q-d+2)$. We address these curves in the singular setting. In particular, we prove that $d(q-d+2)$ is a lower bound on the number of $F_q$-rational points on such curves of degree $d$. Introduction Let p be a prime number and F_q be the field with q = p^s elements, for some integer s ≥ 1. An irreducible plane curve C, defined over F_q, is called q-Frobenius nonclassical if the q-Frobenius map takes each simple point P ∈ C to the tangent line to C at P. In this case, there is an exponent h with p ≤ p^h ≤ d so that the intersection multiplicity i(C.T_P(C); P) of C and the tangent line T_P(C) at a simple point P ∈ C is at least p^h, and actually i(C.T_P(C); P) = p^h holds for a general point P ∈ C. For convenience, ν := p^h (1.1) is called the q-Frobenius order of C. Frobenius nonclassical curves were introduced in the work of Stöhr and Voloch [7], and one reason for highlighting this special class of curves comes from the following result (see [7, Theorem 2.3]). Theorem 1.1 (Stöhr-Voloch). Let C be an irreducible plane curve of degree d and genus g defined over F_q, with q-Frobenius order ν. Then #C(F_q) ≤ (ν(2g − 2) + (q + 2)d)/2. (1.2) Note that by F_q-rational points on C, we mean the F_q-rational points on the nonsingular model of C. Based on Theorem 1.1, Frobenius nonclassicality can be considered as an obstruction to using the nicer upper bound given by inequality (1.2) with ν = 1. That is a clear reason why one should try to understand such curves better. At the same time, investigating Frobenius nonclassical curves is a way of searching for curves with many points. For instance, the Hermitian curve x^{q+1} + y^{q+1} = 1, over F_{q^2}, and the Deligne-Lusztig-Suzuki curve over F_q: y^q + y = x^{q_0}(x^q + x), where q_0 = 2^s, s ≥ 1, and q = 2q_0^2, which are well-known examples of curves with many points, are Frobenius nonclassical. With regard to the number of rational points, a somewhat surprising fact was proved by Hefez and Voloch in the case of nonsingular curves (see [3]). Theorem 1.2 (Hefez-Voloch). Let X be a nonsingular q-Frobenius nonclassical plane curve of degree d defined over F_q. If X(F_q) denotes the set of F_q-rational points on X, then #X(F_q) = d(q − d + 2). (1.3) Let us recall that if X is a nonsingular q-Frobenius nonclassical plane curve of degree d, and ν > 2 is its q-Frobenius order defined in (1.1), then d lies in the range (1.4) (see [5, Theorem 8.77]). Now note that if ν > 3 and d is within the range given by (1.4), then d(q − d + 2) > ((2g − 2) + (q + 2)d)/2, (1.5) where the number on the right-hand side of (1.5) is the bound given by Theorem 1.1 for the case ν = 1. In other words, (1.3) tells us that nonsingular Frobenius nonclassical curves of degree d usually have many rational points in comparison with the Frobenius classical ones of the same degree. In this paper, we show that this statement could be applied more broadly if we were to drop the exclusivity on nonsingularity. More precisely, we prove the following: Theorem 1.3. Let C be a q-Frobenius nonclassical curve of degree d and genus g. If M_q^S is the number of simple points of C in PG(2, q), then the lower bound (1.6) holds, where m_P are the multiplicities of the singular points P ∈ Sing(F_q) ⊆ PG(2, q) of C, and g* = (d − 1)(d − 2)/2 − Σ_{P ∈ Sing(F_q)} m_P(m_P − 1)/2 is its F_q-virtual genus. Moreover, equality holds in (1.6) if and only if all branches of C are linear. Note that the bound (1.6) does not depend on the Frobenius order ν. 
A very interesting consequence of Theorem 1.3 is the following: #C(F_q) ≥ d(q − d + 2), and equality holds if and only if C is nonsingular. Preliminaries Let us begin by briefly recalling the notions of classicality and q-Frobenius classicality for plane curves. For a more general discussion, including the notion and properties of branches, we refer to [5] and [4]. Let C ⊂ P^2 be an irreducible algebraic curve of degree d and genus g. The numbers 0 = ε_0 < ε_1 = 1 < ε_2 represent all possible intersection multiplicities of C with lines of P^2 at a generic point of C. Such a sequence is called the order sequence of C, and it can be characterized as the smallest sequence (in lexicographic order) such that det(D_ζ^{ε_i} x_j) ≠ 0, where D_ζ^k denotes the kth Hasse derivative with respect to a separating variable ζ, and x_0, x_1, x_2 are the coordinate functions on C ⊂ P^2. The curve C is called classical if ε_2 = 2, and nonclassical otherwise. The number ν is called the q-Frobenius order of C, and such a curve is called q-Frobenius classical if ν = 1. Associated to the curve C, there exist two distinguished divisors R and S, which play an important role in estimating the number of F_q-rational points of C. When the curve is Frobenius nonclassical, some valuable information can be obtained by comparing the multiplicities v_P(R) and v_P(S) for the points P ∈ C. In general, computing these multiplicities is tantamount to studying some functions in F_q(x, y) given by Wronskian determinants such as det(D_ζ^{ε_i} x_j) and (2.1). This idea was first exploited by Hefez and Voloch, in their investigation of the nonsingular case [3]. As noted by Hirschfeld and Korchmáros in [4], this idea can be useful in the singular case as well. Let F_q(C) := F_q(x, y) be the function field of an irreducible curve C : f(x, y) = 0. Recall that for any given place P of F_q(C) and a local parameter t at P, one can associate a (primitive) branch γ in special affine coordinates. The branch γ is called linear if j_1 = 1. If p ∤ j_1 (resp. p | j_1), then the branch is called tame (resp. wild). Obviously, linear branches are tame. When the curve C : f(x, y) = 0 is defined over F_q, then C(F_q) will denote the set of places of degree one in the function field F_q(C). Considering the projective closure F(x, y, z) = 0 of C, we define the following numbers, which are clearly related to #C(F_q): (i) M_q^S = number of simple points of F(x, y, z) = 0 in PG(2, q). (ii) M_q = number of points of F(x, y, z) = 0 in PG(2, q). (iii) B_q = number of branches of C centered at a point in PG(2, q). Hereafter, C will denote an irreducible plane curve of degree d and genus g defined over F_q. A relevant step to prove our main result is based on the following: Theorem (Hirschfeld-Korchmáros). Assume that C has only tame branches. If C is a nonclassical and q-Frobenius nonclassical curve, then (2.2) holds, and equality holds if and only if every singular branch of C is centered at a point of PG(2, q). The next lemma extends Hirschfeld-Korchmáros' result, and our proof is built on theirs. In particular, all the definitions and notations, explained in detail in [4], will be borrowed. Lemma 2.3. If C is q-Frobenius nonclassical, then there exist at least (q − 1)d − (2g − 2) tame branches centered at a point of PG(2, q). In particular, B_q ≥ (q − 1)d − (2g − 2). (2.3) Moreover, if every branch centered at a point of PG(2, q) is tame, then (2.3) is an equality if and only if all the remaining branches are linear. Proof. We closely follow the notation used in [4]. 
The q-Frobenius nonclassicality of C gives a relation that holds for any place P of F_q(C). Let γ be the (primitive) branch associated to the place P, with j_1 ≤ s. If γ is tame, i.e., p ∤ j_1, then the bound (2.6) follows (see the corresponding proof in [4]). Now let us address the wild case, i.e., the case p | j_1, for which one obtains the bound (2.7). Therefore, combining (2.6) and (2.7), we arrive at the desired lower bound for the number of tame branches centered at a point of PG(2, q). Now let us assume that every branch centered at a point of PG(2, q) is tame. Then (2.6) implies that the remaining tame branches are linear. In addition, (2.7) constrains the form of any wild branch. However, if this is the case, then we obtain b ∈ F_q. Thus, by hypothesis, such a branch must be tame, and then the assertion follows. The converse follows immediately from the fact that linear branches are automatically tame. The result The aim of this section is to prove Theorem 1.3 and some of its relevant corollaries. Proof of Theorem 1.3. Note that from Lemma 2.3 and the definition of B_q, we have (3.1), which gives the bound of the theorem. Now note that equality in the latter is equivalent to equality on both sides of (3.1). Let us assume that equality holds. Then (3.2) and (1.6) imply M_q^S = M_q and g = g*, respectively. The first equality means that all F_q-points of C are smooth, and thus g* = (d − 1)(d − 2)/2. The latter equality, in addition, gives g = (d − 1)(d − 2)/2. Therefore, C is a smooth curve. Conversely, if C is smooth then M_q = B_q, and Lemma 2.3 gives B_q = (q − 1)d − (2g − 2). Since g = (d − 1)(d − 2)/2, the result follows. The following additional consequences are also worth mentioning. Corollary 3.1. Let C be a q-Frobenius nonclassical curve of degree d whose singularities are ordinary. If the singular points have their tangent lines defined over F_q, then #C(F_q) can be computed exactly. Proof. Note that all singularities are ordinary and defined over F_q. Thus g* = g, and equality in (1.6) holds. On the other hand, since the tangent lines of the singular points are defined over F_q, each such point P gives rise to exactly m_P F_q-rational points of C. Therefore the stated count follows, which gives the result. Proof. This follows directly from Corollary 3.2 and Theorem 1.1. The next example illustrates how the choice of singular q-Frobenius nonclassical curves of degree d, over nonsingular ones of the same degree, can make a significant difference with respect to the number of rational points. Consider the curves C_1: x^{13} = y^{13} + z^{13} and C_2: x^{13} = y^{13} + y^9 z^4 + y^3 z^{10} + y z^{12} + 2z^{13}, over F_{27}. They are both 27-Frobenius nonclassical, and only C_1 is smooth. One can check that #C_1(F_{27}) = 208, whereas #C_2(F_{27}) = 280, in addition to C_2 being of smaller genus.
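A quick arithmetic check of this example against the Hefez-Voloch count and the lower bound discussed above (an editorial verification, not part of the original argument):

\[
d(q-d+2) = 13\,(27 - 13 + 2) = 13 \cdot 16 = 208 = \#C_1(\mathbb{F}_{27}),
\]

so the smooth curve $C_1$ attains the Hefez-Voloch value exactly, while the singular curve $C_2$ satisfies

\[
\#C_2(\mathbb{F}_{27}) = 280 \geq 208 = d(q-d+2),
\]

consistent with $d(q-d+2)$ being only a lower bound once singularities are allowed.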
2015-11-02T00:09:18.000Z
2015-11-02T00:00:00.000
{ "year": 2017, "sha1": "55b796e8991b2f78521a4a01915d0c7859aa6fd8", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1511.00339", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "55b796e8991b2f78521a4a01915d0c7859aa6fd8", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
246027485
pes2o/s2orc
v3-fos-license
Psychometric Evaluation of a Fear of COVID-19 Scale in China: Cross-sectional Study Background At the very beginning of the COVID-19 pandemic, information about fear of COVID-19 was very limited in Chinese populations, and there was no standardized and validated scale to measure the fear associated with the pandemic. Objective This cross-sectional study aimed to adapt and validate a fear scale to determine the levels of fear of COVID-19 among the general population in mainland China and Hong Kong. Methods A web-based questionnaire platform was developed for data collection; the study instruments were an adapted version of the 8-item Breast Cancer Fear Scale (“Fear Scale”) and the 4-item Patient Health Questionnaire. The internal construct validity, convergent validity, known group validity, and reliability of the adapted Fear Scale were assessed, and descriptive statistics were used to summarize the participants’ fear levels. Results A total of 2822 study participants aged 18 years or older were included in the analysis. The reliability of the adapted scale was satisfactory, with a Cronbach α coefficient of .93. The item-total correlations corrected for overlap were >0.4, confirming their internal construct validity. Regarding convergent validity, a small-to-moderate correlation between the Fear Scale and the 4-item Patient Health Questionnaire scores was found. Regarding known group validity, we found that the study participants who were recruited from Hong Kong had a higher level of fear than the study participants from mainland China. Older adults had a higher level of fear compared with younger adults. Furthermore, having hypertension, liver disease, heart disease, cancer, anxiety, and insomnia were associated with a higher fear level. The descriptive analysis found that more than 40% of the study participants reported that the thought of COVID-19 scared them. About one-third of the study participants reported that when they thought about COVID-19, they felt nervous, uneasy, and depressed. Conclusions The psychometric properties of the adapted Fear Scale are acceptable to measure the fear of COVID-19 among Chinese people. Our study stresses the need for more psychosocial support and care to help this population cope with their fears during the pandemic. Introduction In December 2019, the novel coronavirus disease 2019 (COVID-19) emerged in Wuhan City, China [1]. The outbreak rapidly evolved into a global pandemic [2], affecting more than 190 countries and regions [3]. The COVID-19 pandemic continues to spread on a global scale. As of December 20, 2021, there have been more than 271 million confirmed cases of COVID-19 worldwide, with more than 5 million deaths [4]. The COVID-19 pandemic has lasted for almost 2 years and is still ongoing. With time, an increasing number of COVID-19 variants was reported globally [5]. COVID-19 is not only life-threatening, but it also leads to psychological distress [6][7][8][9][10]. The concomitant public health measures such as quarantines, social distancing, and lockdowns can also increase psychosocial distress. Since the start of the COVID-19 pandemic, a plethora of research studies have been conducted to examine the psychological status of people during the pandemic. A meta-analysis of 55 peer-reviewed studies found that the prevalence of depression was 16%, the prevalence of anxiety was 15%, the prevalence of insomnia was 24%, and the prevalence of posttraumatic stress disorder was 22% [11]. 
Another meta-analysis of the prevalence of stress, anxiety, and depression among the general population during the COVID-19 pandemic found that the prevalence of stress was 29.6%, that of anxiety was 32%, and that of depression 34% [12]. Moreover, compared with studies conducted in Europe, those conducted in Asia found a higher prevalence of anxiety (Asia 33% vs Europe 24%) and depression (Asia 35% vs Europe 32%) [12]. These studies suggest that the pandemic substantially jeopardizes the psychological well-being of the general population [11,12]. In addition to anxiety, depression, and stress, fear is also a common psychological response to COVID-19 [13]. In brief, fear is an adaptive emotion that helps defend against potential danger [14]. Fear may occur in response to specific stimuli in the present environment or in anticipation of future or imagined situations that pose a threat to oneself [15]. During the COVID-19 pandemic, people may experience the fear of contracting the infection and a feeling of uncertainty. Fear can be beneficial because it can motivate people to engage in preventive behaviors, such as hand hygiene and mask wearing [16]. However, excessive fear can be maladaptive, leading to psychological distress. For example, fear of COVID-19 may exacerbate preexisting mental health and psychiatric conditions [17]. In extreme situations, fear may lead to suicidal ideation [18]. Excessive fear can cause irrational behaviors, such as panic buying [19]. It is noteworthy that the COVID-19 pandemic has reignited the fear resulting from the 2003 severe acute respiratory syndrome outbreak for many people in mainland China and Hong Kong. This adverse experience was unique to those populations. The fear levels of people in mainland China and Hong Kong may therefore be different from those in populations that did not undergo that adverse experience. Assessing and managing fear is a crucial component of outbreak control and health promotion [20]. In this study, the 8-item Breast Cancer Fear Scale developed by Champion et al [21] was used. We chose this instrument to measure fear levels for several reasons. First, at the very beginning of the pandemic, there was no standardized and validated study instrument specifically developed to measure fear levels related to COVID-19. For example, the Fear of COVID-19 Scale developed by Ahorsu and colleagues [13] was not available when we planned this study. Second, the 8-item Breast Cancer Fear Scale ("Fear Scale") was one of the few instruments available to measure fear among the Hong Kong Chinese population [22]. Furthermore, even though the Fear Scale was originally developed to measure fear related to breast cancer, the question items are generic and comprehensive. The Fear Scale covers common responses to fear such as feeling scared, nervous, upset, depressed, jittery, uneasy, and anxious, as well as having heart palpitations. According to a study in Canada, many participants felt uneasy, distressed, anxious, and nervous due to the COVID-19 pandemic [23]. A study in Slovakia reported an overall increase in negative feeling such as feeling upset, scared, and afraid during the COVID-19 pandemic [24,25]. The items of the Fear Scale should be applicable and appropriate to measure the fear related to COVID-19. This study aimed to adapt and validate the Fear Scale to determine the levels of fear of COVID-19 in mainland China and Hong Kong. 
With the information on how an individual fears COVID-19, health care providers can design appropriate psychosocial interventions to meet the public's needs. Study Design, Participants, and Sampling An international study was conducted, which aimed to examine the global impact of the COVID-19 pandemic on lifestyle behaviors, fear, depression, and perceived needs of communities [26,27]. The study was conducted in 30 countries across the globe. It is a cross-sectional web-based survey design. Moreover, a web-based questionnaire platform was developed for data collection [28]. For this analysis, only data collected in mainland China and Hong Kong between July 2020 and January 2021 were used. Study eligibility criteria included (1) aged ≥18 years; (2) being able to read and understand Chinese; and (3) having an internet access. To recruit more people with diverse sociodemographic backgrounds, multiple recruitment strategies were used to recruit study participants. The study participants were recruited by survey service providers, social media platforms such as Facebook, WeChat, and Twitter, and snowball sampling, in which the existing study participants helped to recruit additional participants to join this study. To encourage more people to complete the survey, for each completed questionnaire, HK $1 (US $0.13) would be donated to the Red Cross in the respondent's region. The study protocol has been published elsewhere [26]. The study was approved by the institutional review board of the University of Hong Kong/Hospital Authority Hong Kong West Cluster (reference UW 20-272). All the procedures involving human participants in this study were conducted in accordance with the ethical standards of the institutional review board and the 1964 Declaration of Helsinki and its later amendments. Electronic informed consent was obtained from each study participant. Outcomes and Instruments The primary outcome of the study was the fear of COVID-19. To measure the fear of COVID-19, we adapted the 8-item Breast Cancer Fear Scale developed by Champion et al [21] for this study. The study instrument was originally developed to measure women's emotional responses to breast cancer. In the scale developed by Champion et al [21], a 5-point Likert scale is used (1=strongly disagree, 2=disagree, 3=neutral, 4=agree, and 5=strongly agree); a higher score indicates a higher level of fear. The total score of the instrument is the sum of each item. In this paper, we changed the words "breast cancer" to "COVID-19" in all of the following 8 items: (1) the thought of COVID-19 scares me; (2) when I think about COVID-19, I feel nervous; (3) when I think about COVID-19, I get upset; (4) when I think about COVID-19, I get depressed; (5) when I think about COVID-19, I get jittery; (6) when I think about COVID-19, my heart beats faster; (7) when I think about COVID-19, I feel uneasy; and (8) when I think about COVID-19, I feel anxious. The face validity of the adapted instrument was evaluated by an expert panel of this study. The 4-item Patient Health Questionnaire (PHQ-4), which measures anxiety and depressive symptoms, was administered to evaluate the convergent validity of the Fear Scale. The PHQ-4 includes the 2-item Generalized Anxiety Disorder scale and the 2-item Patient Health Questionnaire (PHQ-2). A 4-point Likert scale is used (0=not at all; 1=several days; 2=more than half the days; and 3=nearly every day). 
The summary score of the PHQ-4 ranges from 0 to 12, with a higher score indicating greater anxiety and depressive symptoms. The PHQ-4 was validated in Chinese adults [29]. The study supported its 2-factor model and reliability [29]. Cronbach α coefficient was .87 for the PHQ-4, .80 for the 2-item Generalized Anxiety Disorder, and .80 for the PHQ-2 in this study. A structured questionnaire was used to collect sociodemographic factors such as age, gender, and comorbidities. Data Analysis The internal construct validity, convergent validity, known group validity, and reliability of the Fear Scale were assessed. The internal construct validity was evaluated using the corrected item-total correlation; a correlation coefficient of ≥0.4 indicated adequate internal construct validity. The convergent validity of the Fear Scale was determined by calculating the Pearson correlation coefficient between the total score of the Fear Scale and the total score of the PHQ-4. It was hypothesized that an absolute value Pearson correlation coefficient of at least 0.3 was required [30]. To evaluate the known group validity, independent t tests were used to compare the mean score of the Fear Scale between (1) people recruited from mainland China and people recruited from Hong Kong [31]; (2) people aged 18-59 years and people aged 60 years or older [15]; and (3) male and female participants [32]. A study among Chinese university students reported that students in mainland China had lower fear of instability related to the COVID-19 pandemic when compared with students in Hong Kong [31]. Another study among pregnant women and new mothers reported that compared with the study participants in mainland China, the level of fear related to the COVID-19 pandemic was significantly higher among study participants in Hong Kong [33]. A study in Singapore found that older age was associated with greater fear of COVID-19 [15]. Another study in the Spanish population found that fear was higher among women than among men [32]. Besides, a study in Turkey reported that the COVID-19 fear scores were higher among people with a chronic disease [34]. Therefore, we also compared the mean score of the Fear Scale between people with and without the following chronic diseases, which were highly prevalent in Chinese populations: (1) hypertension; (2) diabetes; (3) liver disease; (4) heart disease; (5) stroke; (6) chronic obstructive pulmonary disease; (7) cancer; (8) depression; (9) anxiety; and (10) insomnia. Cohen d effect sizes were also calculated. The interpretation of the effect sizes was as follows: trivial (<0.2), small (≥0.2 to <0.5), moderate (≥0.5 to <0.8) and large (≥0.8). Finally, descriptive statistics were used to describe the fear levels of the study participants. Furthermore, multiple linear regression analysis was used to explore the known associations between sociodemographic and clinical factors, on the one hand, and the Fear Scale, on the other. Reliability and Validity of the Fear Scale The mean score of the Fear Scale was 23.60 (SD 6.64), and the Cronbach α coefficient was .93. The corrected item-total correlations were >0.7 for all items. Table 2 shows the results of the internal consistency and internal construct validity. The Pearson correlation coefficient between the Fear Scale and PHQ-4 scores was 0.23 (P<.001). Table 3 shows the results of the convergent validity. 
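To make the psychometric analyses described above concrete, the following is a minimal Python sketch of the corrected item-total correlation, Cronbach's α, the Pearson correlation used for convergent validity, and the Cohen d effect size for a known-group comparison. All responses below are randomly generated placeholders, not the study data, and the variable names are illustrative only.

```python
# Minimal sketch with hypothetical data (not the study data): Cronbach's alpha,
# corrected item-total correlation, Pearson correlation (convergent validity),
# and Cohen's d for a known-group comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical responses: 100 respondents x 8 Fear Scale items (1-5 Likert)
fear_items = rng.integers(1, 6, size=(100, 8))
fear_total = fear_items.sum(axis=1)
phq4_total = rng.integers(0, 13, size=100)          # hypothetical PHQ-4 totals

# Cronbach's alpha for the 8-item scale
k = fear_items.shape[1]
alpha = k / (k - 1) * (1 - fear_items.var(axis=0, ddof=1).sum() / fear_total.var(ddof=1))
print(f"Cronbach's alpha = {alpha:.2f}")

# Corrected item-total correlation: each item vs. the total of the other items
for i in range(k):
    rest = fear_total - fear_items[:, i]
    r, _ = stats.pearsonr(fear_items[:, i], rest)
    print(f"Item {i + 1}: corrected item-total r = {r:.2f}")

# Convergent validity: Pearson correlation between Fear Scale and PHQ-4 totals
r_conv, p_conv = stats.pearsonr(fear_total, phq4_total)
print(f"Fear Scale vs PHQ-4: r = {r_conv:.2f}, P = {p_conv:.3f}")

# Known-group comparison: Cohen's d with a pooled standard deviation
group_a, group_b = fear_total[:50], fear_total[50:]
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
print(f"Cohen's d = {(group_a.mean() - group_b.mean()) / pooled_sd:.2f}")
```

Because the inputs are random, the printed statistics are illustrative of the computations only; with the actual item-level data, the same formulas reproduce the reliability and validity indices reported here.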
With respect to the known group comparisons, the results of the independent t tests showed that the study participants who were recruited from Hong Kong had a higher level of fear compared with the study participants from mainland China (Cohen d effect size 0.24). Furthermore, older adults (60 years or above) had a higher level of fear than younger adults (Cohen d effect size 0.39). Study participants with cancer (Cohen d effect size 0.58), heart disease (Cohen d effect size 0.44), hypertension (Cohen d effect size 0.36), liver disease (Cohen d effect size 0.33), insomnia (Cohen d effect size 0.33), and anxiety (Cohen d effect size 0.28) had a higher level of fear than those without such conditions. Table 4 and Table 5 show the results of the known group comparisons by independent t test. The results of multiple linear regression are shown in the Multimedia Appendix 1. The Fear Levels of the Study Participants In total, 47.1% (n=1330) of the participants reported that the thought of COVID-19 scared them. Moreover, 36.6% (n=1032) of the study participants reported that they felt nervous when they thought about COVID-19. About one-third of the participants reported that they felt uneasy (1003, 35.5%) and became depressed (923, 32.7%) when they thought about COVID-19. The descriptive statistics of the Fear Scale are shown in Table 2. We also separated the analysis between data collected in Hong Kong and those collected in China. Those results are shown in Multimedia Appendix 1. Principal Results In the first part of this study, we assessed the psychometric properties in terms of internal construct validity, convergent validity, known group validity, and reliability of the Fear Scale. The Cronbach α coefficient was .93, which is far larger than the recommended cut-off value of .7. This finding supports the general agreement between the 8 items that make up the composite score of the scale to measure the fear related to COVID-19. Moreover, we found that all the coefficients of the item-total correlation, corrected for overlaps, were larger than 0.4, supporting the internal construct validity of the modified scale. These results supported the suggestion that all individual items measured the same construct as that measured by the other items. With regards to the study's convergent validity, we found a small-to-moderate correlation between the total score of the Fear Scale and the total score of the PHQ-4. Another important finding of this study was that participants with a chronic disease had a higher fear level than those without a chronic disease. Particularly, we found that hypertension, liver disease, heart disease, cancer, anxiety, and insomnia were associated with a higher fear level. Limitations There were some limitations in this study. First, the study was conducted in mainland China and Hong Kong. Therefore, the study findings may not be transferable to other geographic areas in which the severity of COVID-19, case fatality rate, and infection control measures are different. We expect that the fear level would be even higher in areas where the severity and case fatality rate of COVID-19 were more severe. Second, we could not explore the trajectory of the fear levels over time due to the cross-sectional nature of the study. Third, we adapted the Breast Cancer Fear Scale in this study; thus, some of the constructs related to COVID-19 could not be measured. However, as previously mentioned, there was no validated fear scale specific to COVID-19 when we planned our study. 
Fourth, regarding the reliability of the scale, we only evaluated its internal consistency. We were not able to evaluate the test-retest reliability of the scale due to the cross-sectional design of the study. Fifth, regarding the known group comparison, the sample size of some subgroups was small such as that of patients with diabetes and depression. There might be insufficient statistical power to detect the differences between groups. Finally, we used a web-based questionnaire platform to collect the data. People with low computer literacy would probably be excluded from the study. Accordingly, the potential sampling bias should be noted. Comparisons With Prior Work We found that the total score of the Fear Scale had a higher correlation with the PHQ-4 anxiety subscale than with the PHQ-4 depression subscale. In fact, there are distinct differences in psychological features between fear and depression. According to Witte [35], fear is conceptualized as negatively toned emotion accompanied by a high level of physiologic arousal stimulated by a threat. Fear can be expressed as a physiological arousal, such as feeling "jittery" and "heart beating faster," through verbal self-reports of fear (eg, "I feel scared") and overt acts that exhibit fear, such as facial expression [21]. These emotional and physiological reactions to perceived threats are fundamentally different from those of depression, which is manifested through the following 4 symptom clusters: (1) emotional symptoms such as feeling sad and worthless; (2) cognitive symptoms such as a negative view of the self and hopelessness; (3) motivational symptoms such as a lack of incentive; and (4) somatic symptoms such as a loss of appetite and sleep disturbances [36,37]. Additionally, it was suggested that fear and anxiety are largely distinct emotions. A meta-analysis reported only a moderate (r=0.32) relationship between measures of trait fear and anxiety [38]. Fear is an aversive psychological state during which an individual is motivated to escape a specific and imminent threat. The characteristics of fear include short-lived arousal that quickly dissipates after the threat is avoided. By contrast, anxiety is an aversive psychological state that occurs while an individual approaches an ambiguous and uncertain threat. Hypervigilance and hyperarousal are the typical behaviors characteristic of anxiety [38]. The small-to-moderate correlation between the Fear Scale and the PHQ-4 further supported the need for this study, which adapted and validated the Fear Scale to measure the fear of COVID-19. Besides, compared with study participants recruited in Hong Kong, those recruited from mainland China had a higher PHQ-4 score but lower Fear Scale total score. This finding further suggested that the constructs of fear and anxiety are different. Participants with a chronic disease had a higher fear level than those without one. This finding was consistent with that of a matched case-control study, which found that the prevalence of anxiety symptoms and depressive symptoms and the level of stress were significantly higher among those with preexisting chronic health conditions (59%, 71.6%, and 73.7%, respectively) compared with controls (25.6%, 31.1%, and 43.3%, respectively) [39]. Evidence has suggested that the presence of comorbid chronic conditions would increase the risk of death from COVID-19 [40][41][42]. 
Moreover, one major concern with the COVID-19 pandemic was its impacts on the routine use of health care services especially for individuals with comorbidities [43]. Service disruptions due to cancellations of elective care and lockdowns hindering access to health care facilities, in addition to the diffidence of patients with a chronic disease in seeking assistance for fear of risking iatrogenic exposure, altogether increased the psychological burden of patients with a chronic disease. Thus, it was not surprising that people with a chronic disease had a higher fear level than those without. In this study, more than 40% of the study participants reported that the thought of COVID-19 scared them. About one-third of the study participants reported that when they thought about COVID-19, they felt nervous, uneasy, and depressed. No doubt, the COVID-19 pandemic was very stressful for people and the communities in general [7]; the fear of infection was very common during the pandemic. Furthermore, people were worried that the health care system could not cope with the COVID-19 pandemic, that there were not enough hospital beds and ventilators to handle the rising number of COVID-19 cases. Another concern weighing on people's minds was the COVID-19 recession. Fear of the COVID-19 pandemic can be overwhelming and cause strong emotions [7]. It was also noteworthy that the COVID-19 pandemic rekindled fears of the 2003 severe acute respiratory syndrome epidemic in mainland China and Hong Kong. Implications First, based on the psychometric evaluation, we found that the adapted scale was a valid and reliable measure to assess the level of fear related to COVID-19. Further studies can use this scale to longitudinally monitor the fear level in different communities. Second, given the high fear levels found in the study sample, it is required to provide psychosocial care for the general public to diminish the psychological burden of the pandemic. Third, the findings call for the need to provide more psychosocial care for chronic disease patients and older adults. Conclusion This study found that the psychometric properties of the Fear Scale were acceptable to evaluate the fear level of the general Chinese population. Our descriptive analysis found that more than 40% of the study participants reported nervousness when they thought about COVID-19. About one-third of the study participants reported that when they thought about COVID-19, they felt nervous, uneasy, and depressed. Additionally, we found that people with a chronic disease reported a higher fear level than those without. The findings call for the need to provide more psychosocial care for chronic disease patients and older adults.
Combined Rg1 and adipose-derived stem cells alleviate DSS-induced colitis in a mouse model. Inflammatory bowel diseases (IBDs), including Crohn's disease and ulcerative colitis, are chronic inflammatory disorders that can affect the entire gastrointestinal tract and the colonic mucosa. There is no medical or surgical cure for IBD, and the available therapies all have side effects that limit their use, so new therapeutic strategies are urgently needed. Adipose-derived stem cell (ADSC) therapy represents a promising option for the treatment of IBD. A previous study indicated that ginsenoside Rg1 can ameliorate inflammatory diseases such as colitis by inhibiting the binding of LPS to TLR4 on macrophages and restoring the Th17/Treg balance [1]. In this study, we investigated whether Rg1 can enhance the effect of ADSC on DSS-induced colitis in a mouse model. Background The two major clinically defined forms of inflammatory bowel disease (IBD), Crohn's disease (CD) and ulcerative colitis (UC), are chronic remittent or progressive inflammatory conditions that can affect the entire gastrointestinal tract and the colonic mucosa [2]. An increasing number of studies indicate that the interactions among the gut microbiome, the mucosal immune system, and the manner in which environmental factors modify these relationships are particularly relevant to the development of IBD [3]. IBD is associated with substantial morbidity, decreased quality of life, and colitis-associated colorectal cancer (CAC) development, and it has increasingly emerged as a public health challenge worldwide [4]. However, the drugs used to treat IBD are far from optimal, and patients face lifelong treatment and debility. It is therefore urgent to develop treatments that reduce side effects and improve the long-term outcome of IBD. As an emerging therapy for patients with IBD, mesenchymal stem cells (MSCs) hold promise for restoring epithelial barrier integrity, homing to damaged tissue, inhibiting the inflammatory response, and regulating immunity [5-10]. Moreover, many studies have demonstrated that systemic administration of MSCs by intravenous or intraperitoneal injection can alleviate colitis in mouse models [11-14]. Regarding ADSC, a category of MSCs, we and other groups have reported that ADSC is a feasible and effective treatment for Crohn's fistula-in-ano; compared with traditional incision and thread-drawing, ADSC therapy protects the anal function of patients, relieves pain, allows quick recovery, is well tolerated, and improves quality of life during the perioperative period [15]. Ginsenoside Rg1 is a traditional stem extract and one of the main active ingredients of ginseng [16,17]. It has been reported that ginsenoside Rg1 can promote the directed differentiation of stem cells and induce stem cell proliferation [18,19]. For example, Rg1 enhanced the proliferation, differentiation, and soft tissue regeneration of human breast adipose ADSCs in collagen type I sponge scaffolds in vitro and in vivo, and a broad new tissue network was also formed [20]. Moreover, Zhu et al. reported that Rg1 markedly reduces the proinflammatory cytokines released from dendritic cells in a mouse DSS-induced colitis model [1]. To help develop novel therapeutic procedures for IBD patients, in the current study we investigate the role of combined Rg1 and ADSC administration in mouse DSS-induced colitis and explore the specific mechanisms involved in this process.
Material and Methods Adipose-Derived Stem Cell Derivation Human abdominal or buttock adipose tissues were collected, with informed consent, from healthy adult male patients receiving regenerative medicine using ADSC at Nanjing Hospital of Chinese Medicine Affiliated to Nanjing University of Chinese Medicine (Nanjing, China). ADSC were isolated from the samples, and the stromal vascular fraction was cultured in serum-free culture medium at 37 °C in an atmosphere containing 5% CO2. After reaching confluence, adherent cells were trypsinized and replated. Cells were passaged in serum-free culture medium three times, and the characteristics of ADSC were verified by analysis of their differentiation, proliferation, and immunologic phenotypes. The cells were frozen and delivered to Nanjing Medical University. After thawing, the cells were immediately washed, counted, and suspended in phosphate-buffered saline (PBS). For flow cytometry analysis, ADSCs were harvested, washed, and incubated with the specific MSC marker antibodies CD90-FITC, CD44-PE, CD105-FITC, CD73-PE, CD34-PE, and CD45-FITC. Animals Six- to eight-week-old C57BL/6 male mice were purchased from the experimental animal center of Nanjing Medical University (Nanjing, China) and housed under controlled temperature, humidity, and light-cycle conditions. All animal experiments were conducted in compliance with regulations and approved by the Institutional Animal Care and Use Committee of Nanjing Medical University. Induction of experimental colitis and study design Colitis was induced by providing drinking water containing 3% DSS (molecular weight 36,000-50,000; MP Biomedicals, Santa Ana, CA) for 7 days, followed by regular water for 7 days. Mice were divided into five groups. The ADSC and ADSC + Rg1 groups were injected intravenously with 1 × 10^6 ADSC [21] on days 4 and 7, whereas the Control, Rg1, and DSS groups were injected with PBS. Moreover, the Rg1 and ADSC + Rg1 groups were administered Rg1 (at a dose of 20 mg/kg [24]) by gavage once daily from the first day to the 14th day (starting with DSS treatment), whereas the other groups were administered PBS by gavage once daily. All mice were sacrificed on the 15th day after the start of the experiment. Body Weight and Assessment of Colon Length To evaluate the therapeutic effects of ADSC and Rg1, body weight and colon length were analyzed. Body weight was recorded daily, and colon length was measured from the anus to the cecum soon after harvesting the colon, as an indirect assessment of inflammation. Histopathological analysis The colon was excised, fixed in 10% formalin, embedded in paraffin wax, and sliced into 4-µm-thick sections. After hematoxylin and eosin (H&E) staining, histological evaluation was performed in a blinded manner according to a previously published scoring system [22]. Briefly, Score 1: mild mucosal inflammatory cell infiltrates with intact epithelium; Score 2: inflammatory cell infiltrates into mucosa and submucosa with undamaged epithelium; Score 3: mucosal infiltrates with focal ulceration; Score 4: inflammatory cell infiltrates in mucosa and submucosa with focal ulceration; Score 5: moderate inflammatory cell infiltration into mucosa and submucosa with extensive ulcerations; Score 6: transmural inflammation and extensive ulceration.
Serum Parameter Analysis Blood serum was collected from all mice and tested for 6 immune cytokines, including IL-10, IL-6, IL-17A, IL-33, IL-1β, and TNF-α, following the kit manufacturers' procedures. All assay kits were purchased from R&D Systems. All immune cytokine levels were statistically analyzed by t-test in GraphPad Prism 8.0 software. Flow Cytometry Spleens were removed and mechanically dissociated into single-cell suspensions. Splenocytes were incubated with Fc block (CD16/32) for 10 minutes to block non-specific binding before being stained with the conjugated antibodies. For surface staining, cells were incubated with specific antibodies for 30 minutes at 4 °C, followed by washing twice. For intracellular cytokine staining, cells were stimulated with phorbol myristate acetate and ionomycin in the presence of monensin (eBioscience) for 5 h prior to staining with the BD Fixation/Permeabilization Solution kit. Treg cells were fixed and permeabilized using the eBioscience Foxp3/transcription factor staining buffer kit (Invitrogen) according to the manufacturer's instructions. Flow cytometry data were acquired on a BD FACSVerse flow cytometer (BD Biosciences) and analyzed using FlowJo version 10 software. DNA isolation and 16S rRNA gene sequencing In total, 100-mg stool samples were used to extract total bacterial genomic DNA following the protocol of the DNA extraction kit (#DP328, Tiangen Company, Beijing, China). The concentration and purity of the extracted bacterial DNA were determined using the Qubit 2.0 Fluorometer (Thermo Scientific, USA). The V3 and V4 regions of the 16S rRNA genes were amplified using composite specific primers: 338F (5'-ACTCCTACGGGAGGCAGCAG-3') and 806R (5'-GGACTACHVGGGTWTCTAAT-3'). The 16S rDNA data were analyzed using the QIIME software package 1.9; all the analysis and calculation methods used in-house Perl scripts, with special reference to two previous articles [23,24]. Statistical analysis Student's t-test (unpaired, two-tailed) was used to determine the level of significance for comparisons between two groups using GraphPad Prism 8.0 software. Results are shown as mean ± SD, and a P value of less than 0.05 was considered significant. Characterization of Human ADSC ADSC expressed all of the specific MSC markers CD73, CD105, CD90, and CD44 and lacked expression of the hematopoietic markers CD34 and CD45. Differentiation into adipocytes, chondrocytes, and osteocytes was observed in ADSC (Fig. s1). ADSC and Rg1 administration ameliorated DSS-induced colitis As shown in Fig. 1b, the body weights of the mice in the DSS group were consistently reduced from the fifth day, while the mice in the ADSC + Rg1 and Rg1 groups showed a significant improvement in weight loss, although there was an apparent decrease in the ADSC group (P < 0.05). In the colitis model, disease severity is typically associated with colon-length shortening due to intestinal inflammation [25]. The colons in the DSS group were shorter than those in the ADSC, Rg1, and ADSC + Rg1 groups (p < 0.05). Consistent with previous studies [11], H&E analysis of the colon (Fig. 1e) showed that DSS-induced colitis resulted in extended ulcerations, destroyed crypts, and transmural inflammatory infiltration, with a barely intact mucosal structure. However, in mice treated with ADSC and Rg1, histological damage was ameliorated, as evidenced by a preserved mucosal architecture showing only focal erosions and mild-to-moderate mucosal inflammatory infiltration.
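As a concrete illustration of the group-comparison statistics described in the Statistical analysis subsection above, the following is a minimal Python sketch (using SciPy) of the unpaired two-tailed t-test and mean ± SD summary used for the serum cytokine comparisons; the cytokine values and group names are hypothetical placeholders, not measured data.

```python
# Minimal sketch (hypothetical data): unpaired two-tailed t-test on serum
# cytokine levels, reported as mean +/- SD, mirroring the analysis performed
# in GraphPad Prism. Values below are illustrative, not measured data.
import numpy as np
from scipy import stats

# Hypothetical IL-6 concentrations (pg/mL) for two groups of mice
il6_dss = np.array([85.0, 92.3, 78.4, 101.2, 88.7, 95.1])
il6_adsc_rg1 = np.array([41.2, 55.6, 47.9, 38.4, 50.3, 44.8])

def summarize(name, a, b):
    # Unpaired (independent), two-tailed Student's t-test
    t_stat, p_value = stats.ttest_ind(a, b, equal_var=True)
    print(f"{name}: DSS {a.mean():.1f} +/- {a.std(ddof=1):.1f}, "
          f"ADSC+Rg1 {b.mean():.1f} +/- {b.std(ddof=1):.1f}, "
          f"t = {t_stat:.2f}, P = {p_value:.4f}, "
          f"{'significant' if p_value < 0.05 else 'not significant'} at P < 0.05")

summarize("IL-6", il6_dss, il6_adsc_rg1)
```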
To evaluate the intestinal mucosal architecture and inflammatory infiltration, we used the histopathology scoring system [22] to quantify the degree of inflammatory severity; the mice treated with ADSC and Rg1 had a significantly lower score than the mice treated with DSS only (P < 0.05) (Fig. 1f). ADSC and Rg1 treatment alleviated colitis by regulating pro-/anti-inflammatory cytokines Pro-inflammatory cytokines play a crucial role in the progression of DSS-induced colitis [26]. To explore whether ADSC and Rg1 treatment could alleviate colitis by regulating inflammation, we measured cytokine levels in blood serum. As shown in Fig. 2a-h, the levels of the inflammatory cytokines IL-6, IL-33, TNF-α, IL-1β, and IL-17A in the Rg1, ADSC, and ADSC + Rg1 groups decreased significantly, and IL-10 increased, compared with those in the DSS group. Moreover, we found that the combined ADSC + Rg1 treatment inhibited the expression of inflammatory cytokines significantly more than Rg1 or ADSC alone (Fig. 2a-h). Therefore, Rg1 can enhance the effect of ADSC on DSS-induced colitis in a mouse model. ADSC and Rg1 regulated the Treg/Th17 balance to maintain intestinal homeostasis A previous study demonstrated that ADSC could inhibit the Th17 response in T-cell-mediated autoimmune diseases [27]. It has also been reported that ADSC could modulate Treg/Th17 differentiation in DSS-induced colitis model mice [15]. We therefore compared the number of Th17 cells between groups. We found that the number of Th17 cells was much higher in the DSS group than in the Rg1, ADSC, and ADSC + Rg1 groups (Fig. 3b, c, e). In contrast, the percentage of Treg cells was remarkably higher in the Rg1, ADSC, and ADSC + Rg1 groups than in the DSS group (Fig. 3a, d, f). In summary, Rg1 and ADSC administration selectively upregulated the frequency of Treg cells and downregulated the ratio of Th17 cells against DSS-induced colitis, improving the Treg/Th17 balance to maintain intestinal homeostasis. In addition, simultaneous ADSC and Rg1 treatment showed a better trend of recovering the Treg/Th17 balance than ADSC or Rg1 alone. ADSC and Rg1 treatment significantly altered gut microbiota diversity and composition The gut microbiota is an important factor in regulating intestinal inflammation [28]. Therefore, we further compared the microbiota composition of the five groups. The alpha diversity of the bacterial communities was evaluated according to Shannon's diversity index (Fig. 4a). The Shannon's diversity index of the ADSC and Rg1 groups showed no statistical difference from that of the DSS group (Fig. 4a, P > 0.05). The PCoA scatterplot revealed clear clustering of the gut bacterial communities among the five groups (Fig. 4c). To compare the gut microbiota composition of the five groups, we then conducted a statistical analysis of the gut microbiota at the genus level using the Kruskal-Wallis test. At the genus level, compared with the DSS group, the gut microbiota of the ADSC, Rg1, and ADSC + Rg1 groups was characterized by increased Rikenellaceae RC9 and Ruminococcaceae UCG-013 levels (Fig. 4d) and a lower proportion of Erysipelatoclostridium and Escherichia-Shigella (Fig. 4d). Interestingly, the proportions of Rikenellaceae RC9, Ruminococcaceae UCG-013, Erysipelatoclostridium, and Escherichia-Shigella in the ADSC + Rg1 group were more similar to those of the control group. In addition, we also found that the gut microbiota disturbance induced by DSS was not significantly improved by ADSC alone; there was even a worsening trend.
However, Rg1 restored the DSS-induced disturbance of the gut microbiota in ADSC-treated mice. We also identified significant changes in multiple biological pathways across the five groups (Fig. 4e). As shown in Fig. 4e, the fifteen modules in the five groups involved L-arginine degradation; the superpathway of L-arginine, putrescine, and 4-aminobutanoate degradation; the superpathway of methylglyoxal degradation; the glyoxylate cycle; the citrate cycle (TCA cycle); L-methionine biosynthesis; heme biosynthesis; and methanogenesis from acetate. The overrepresentation of the L-arginine degradation pathway in the DSS group may be related to the high abundance of Escherichia-Shigella, which has been shown to have a wide capacity for degrading polysaccharides in those samples [29]. In all, the prediction analysis indicated that gut dysbiosis may induce a disease-linked state by interfering with physiological metabolic functions. Discussion In our study, we found that combined ADSC and Rg1 administration substantially ameliorated colitis compared with ADSC or Rg1 treatment alone, as indicated by less body weight loss, less colon-length shortening, and better histological scores. Additionally, some of the potential mechanisms were presented, including (i) amelioration of colon mucosal barrier damage; (ii) modulation of the inflammatory response; (iii) reshaping of the gut microbiota; and (iv) regulation of Treg/Th17 differentiation. An excessive response of Th17 cells and insufficient function of Treg cells correlate with the onset of IBD. Previous studies have demonstrated that ADSC could inhibit the Th17 response in T-cell-mediated autoimmune diseases [27] and modulate Treg/Th17 differentiation in DSS-induced colitis model mice [30]. Consistent with previous studies, we found that ADSC treatment improved the Treg/Th17 balance. Th17 cells produce the pro-inflammatory cytokine IL-17A, which contributes to the progression of IBD [31]. Tregs are derived from the thymus and function to suppress the innate immune response [32]. A previous study indicated that improving the Treg/Th17 balance contributed to the re-establishment of intestinal immune homeostasis in IBD [33]. In this study, we also found that Rg1 can enhance the effect of ADSC in improving the Treg/Th17 balance. We observed that the microbial community diversity and structure in the ADSC, Rg1, and ADSC + Rg1 groups were significantly changed compared with those of the DSS group. For example, in the DSS group, we found a reduction of Rikenellaceae RC9 and Ruminococcaceae UCG-013 and an increase of Erysipelatoclostridium and Escherichia-Shigella compared with the other groups. Rikenellaceae_RC9_gut_group is a dominant group of Bacteroidetes, which might affect intestinal permeability, oxidative stress, and energy metabolism, and might contribute to the pathogenesis of acute myocardial ischemia [34] and inflammation [35]. The Ruminococcaceae family is a member of the Firmicutes phylum and comprises a broad spectrum of species with different functional properties. An underrepresentation of species belonging to this family has previously been reported in IBD [36]. Erysipelatoclostridium and Escherichia-Shigella have been reported to play an important role in the inflammatory response [37,38]. Besides, our results also showed that ADSC transplantation combined with ginsenoside Rg1 administration could significantly increase the proportion of Rikenellaceae RC9 and Ruminococcaceae UCG-013 and reduce the level of Erysipelatoclostridium and Escherichia-Shigella,
which was superior to ADSC or Rg1 administration alone. The complex interactions between biological pathways and the gut microbiota are intensely associated with host-microbe relationships. Notably, the gut microbiota plays an essential role in IBD through pathways such as the glyoxylate cycle, the citrate cycle (TCA cycle), and L-arginine degradation [39-41]. Consistent with this, the KEGG pathways in the ADSC, Rg1, and ADSC + Rg1 groups differed from those in the DSS group; for example, the glyoxylate cycle, citrate cycle (TCA cycle), and L-arginine pathway were reduced significantly, which implies that ADSC and Rg1 may modulate these pathways by restoring the composition of the gut microbiota. Taken together, this study suggests that combined ADSC and Rg1 administration may enhance the clinical efficacy of IBD treatment by restoring the Treg/Th17 balance and the gut microecological structure. Conclusion Overall, our results showed that Rg1 and ADSC treatment restores the balance of pro-/anti-inflammatory cytokines, the Treg/Th17 balance, and gut microecology. We confirmed that the combination of Rg1 and ADSC administration alleviates DSS-induced colitis more efficiently than Rg1 or ADSC treatment alone.
Analysis of Infrared Signature Variation and Robust Filter-Based Supersonic Target Detection The difficulty of small infrared target detection originates from the variations of infrared signatures. This paper presents the fundamental physics of infrared target variations and reports the results of variation analysis of infrared images acquired using a long wave infrared camera over a 24-hour period for different types of backgrounds. The detection parameters, such as signal-to-clutter ratio were compared according to the recording time, temperature and humidity. Through variation analysis, robust target detection methodologies are derived by controlling thresholds and designing a temporal contrast filter to achieve high detection rate and low false alarm rate. Experimental results validate the robustness of the proposed scheme by applying it to the synthetic and real infrared sequences. Introduction Small infrared target detection is important for a range of military applications, such as infrared search and track (IRST) and active protection system (APS). In particular, an APS is a system designed to protect tanks from guided missile or rocket attack by a physical counterattack, as shown in Figure 1. An antitank missile, such as high explosive antitank (HEAT), should be detected at a distance of at least 1 km and tracked for active protection using RADAR and infrared (IR) cameras. Although RADAR and IR can complement each other, this study focused on the IR sensor-based approach because it can provide a high resolution angle of arrival (AOA) and detect high temperature targets. IR sensors are inherently passive systems and do not have all weather capability. In addition, IR images show severe variations according to background, time, temperature, and humidity, which makes the target detection difficult. The use of adaptive IR sensor management techniques can enhance the target detection performance. On the other hand, few studies have analyzed the IR variations in terms of small target detection using the data collected over a 24-hour period. In 2006, Jacobs summarized the basics of radiation, atmospheric parameters, and infrared signature characterization [1]. He measured the thermal variations in various environments. Recently, the TNO research team characterized small surface targets in coastal environments [2][3][4]. In 2007, the TNO team introduced the measurement environment and examined the target contrast and contrast features of a number of small sea surface vessels [2]. The analysis revealed a variation in contrast due to changes in solar illumination, temperature cooling by wind and sunglint. In 2008, the team analyzed the variations in the vertical profile radiance and clutter in a coastal environment [3]. Based on the analysis, a Maritime Infrared Background Simulator (MIBS) background simulation was performed under these measurement conditions. They can predict clutter in coastal background accurately. In 2009, they extended the IRbased analysis to visible cameras, hyperspectral cameras, and polarization filters to validate the contrast and clutter model [4]. The first contribution of this paper is IR variation analysis in terms of small target detection. The second contribution is the acquisition of 24-hour IR data in winter and spring. The third contribution is variation analysis for different backgrounds. The fourth contribution is the proposition of robust small target detection based on the variation analysis. 
Section 2 characterize the infrared signature and presents the sources of infrared variations. Section 3 explains the target detection parameters and variation measurement results including the details of the IR variability in different background, time, temperature, and humidity. Section 4 proposes small target detection methods to overcome the target signature variations. Section 5 presents the experimental results for synthetic and real test sequences. Section 6 discusses the analysis results and concludes the paper. Characterization of Infrared Target Signature and Variation 2.1. Physical Modeling of Infrared Target Signature. In general, IR target images are obtained by the process of IR radiation contrast, attenuation by atmospheric transmittance, photon to voltage conversion in the IR sensor, and analog-to-digital conversion (ADC), as shown in Figure 2. The temperature contrast between the target and background is radiated and propagated in the air. The radiation contrast is attenuated by atmospheric effects, such as absorption, emission, and scattering of H 2 O and CO 2 . Objects with temperatures higher than 0 K radiate thermal energy, and such energy differences create voltage differences in thermal detectors. Most target detection algorithms use the difference information between target and background energy. Based on such basic radiation theory, we can predict relative target digital signal levels given the information of thermal radiation intensity difference between the target and background by carefully modeling the energy transformation processes. Modeling procedures are as follows. First, we input a thermal radiant intensity difference between the target and background. Second, we calculate atmospheric transmissivity. Third, we calculate the number of photons in front of the sensors. Fourth, we calculate the voltage levels at a detector. Finally, we obtain gray levels or digital counts after analog-to-digital (A/D) converter. Since the objective of APS is to detect distant targets as early as possible, targets can be regarded as point sources. Therefore, we use the output voltage model in the thermal detector as the following equation [5]: where ( , Δ ) [watt/sr m] denotes radiant intensity at wavelength ( ) when the temperature difference is Δ . [m] represents the distance between the IR camera and a target. These two parameters should be provided by users. Δ pnt means voltage difference produced by target region and background region in a thermal detector. atm ( ) denotes atmospheric transmissivity that is defined as a ratio (the fraction or percent of a particular frequency or wavelength of electromagnetic radiation that passes through a medium without being absorbed or reflected). 0 [m 2 ] represents the area of entrance aperture of the IR camera. ( ) represents the optical transmissivity of the IR camera, which considers the mirrors and lenses. Ideally, the thermal energy of a distant point target should be gathered in a pixel of a detector. However, thermal energy of a point target is dispersed (blurred) due to the diffraction and aberration of the optical system. PVF (point visibility factor) can model such phenomenon quantitatively. So, PVF is defined as the ratio of center pixel energy over total target energy. 1 , 2 represent operation ranges, lower limit, and upper limit of thermal detector. ( ) denotes detector responsivity versus wavelength. If the responsivity is available, then we can estimate the output voltage given infrared intensity. 
denotes the detector gain. We usually use radiant intensity Δ [watt/ sr] by integrating ( , Δ ) with 1 ∼ 2 . Furthermore, if we use average transmissivity, optics transmissivity, and detector responsivity within wavelength, then (1) is simplified as the following equation: Given radiant intensity (Δ ) of target-background, the total number of photons is calculated by dividing the radiant intensity by energy per photon as in the following equation: where ℎ = 6.626 × 10 −34 [Js] denotes Plank's constant, = 3 × 10 8 [m/s] denotes the speed of light, and center = ( 1 + 2 )/2 means center wavelength. The responsivity and gain are determined by quantum efficiency ( ), electron charge ( [C]), integration time ( [ ]), dark current ( dark ), and equivalent capacitance ( eq ) in the readout integrated circuit. If we use this information, then (2) is rewritten as the following equation: Since the dark current in a cooled thermal detector is so small, it can be removed in the above. In addition, the estimated PVF is reflected by Gaussian filtering in the image domain. So, the final form for point source is simplified as the following equation: The atmospheric transmissivity is calculated using MODerate resolution atmospheric TRANsmission (MODTRAN) and Beer's law [6] according to the target distance. In Beer's law, atmospheric transmittance is defined as atm = − , where denotes attenuation coefficient (km −1 ). If the target distance ( ) is larger than 20 km then Beer's law is used. Otherwise, we use MODTRAN to estimate atmospheric transmissivity. Let us assume that a target of Δ [watt/sr] is at distance [m]. If the projected target size is smaller than 1 pixel, then we use (5) to calculate the difference voltage output. Digital value of difference voltage is obtained as (6) by considering the bit resolution ( [bit]) in the A/D converter and voltage dynamic range (Δ dynamic ): Sources of Signature Variations. The radiation contrast between the target and background can be used to target detection. However, it is challenging problem due to the dynamic behavior of the radiation contrast (Δ ) and atmospheric transmissivity ( atm ). According to Jacobs's analysis [1], IR signature variations are generated by the target conditions, environmental variations, and material properties. The target conditions included exhaust grid/gases, crew compartment heating/cooling, power generator, material properties, camouflage, target location, and orientation. Because the targets (HEAT) in APS are small (length: 1 m, diameter: 0.1 m) and only incoming targets are considered, the variations caused by the target conditions were not considered in the present study. The environmental variations include the induced weather (sun, clouds, rain, and snow), atmospheric effects on transmission, and the geographical location. In this study, IR variations caused by 24-hour weather in winter and spring were considered. The material properties can be another source of IR variations. Different materials in different backgrounds exhibit different IR signatures. For example, there are trees in remote mountains, concrete buildings in urban areas, soil and grass in near fields, and air/cloud in the sky. The material properties are related to the radiation contrast between target and background. Environmental weather conditions are related to the atmospheric transmissivity. Basic Parameters of Small Target Detection. 
In the infrared small target detection community, background subtraction-based approaches are well established and embedded in military systems. In 2011, Kim proposed a modified mean subtraction filter (M-MSF) and a hysteresis threshold-based target detection method [7]. As shown in Figure 3, an input image ( ( , )) is pre filtered using Gaussian coefficients ( 3×3 ( , )) to enhance the target signal and reduce the level of thermal noise according to the following equation: The signal-to-clutter ratio (SCR) is defined as (max target signal − background intensity)/(standard deviation of background). Simultaneously, the background image ( ( , )) is estimated using a 11 × 11 moving average kernel (MA 11×11 ( , )) as the following equation: The pre-filtered image is subtracted from the background image, which produces a target-background contrast image ( contrast ( , )), as expressed in (9). The modified mean subtraction filter (M-MSF) can upgrade the conventional mean subtraction filter (MSF) in terms of false detection: The last step of small target detection is how to determine which pixels correspond to the target pixels. Kim proposed an adaptive hysteresis thresholding method, as shown in Figure 3. A contrast threshold ( 1 ) is selected to be as low as possible to locate the candidate target region. The 8-nearest neighbor (8-NN) based clustering method is then used to group the detected pixels. The SCR threshold ( 2 ) is selected properly to meet the detection probability and false alarm rate, as expressed in (10). A probing region is declared as a target if where and represent the average and standard deviation of the background region, respectively. 2 denotes the user defined parameter used in the control detection rate and false alarm rate. A probing region is divided into the target cell, guard cell, and background cell, according to the results of contrast thresholding and clustering, as shown in Figure 4. Therefore, the key parameters of small target detection are the SCR-related terms, such as the average background intensity ( ), target intensity ( ), and standard deviation of the background ( ). In the SCR computation, the target-background contrast parameter ( − ) can be derived from the key parameters, which should be analyzed according to the IR variations. B , ) over a 24-hour period for a range of backgrounds, such as the remote mountain, building, near field, and sky. As shown in Table 1, a LWIR camera, CCD camera, DSLR camera, thermal target, and IR thermometer were used to record the target and environmental information. The FLIR Tau302 can record digital IR data with a 14-bit resolution. The SONY NEX-VG20 can record visible data with HD video resolution. The 5D Mark II was used to record the overall experimental status. The BMH-30 is a thermal target to simulate the plume of an antitank missile. The normal temperature of the target is approximately 450 ∘ C. The DT-8865 can measure the temperature of the targets and backgrounds with the range of −50 to 1000 ∘ C. Acquisition of Infrared Location and Meteorological Data. In APS, the incoming antitank missiles should be detected at a distance of at least 1 km and then tracked for the following hard killing process. Because the focus of this study was to analyze the effects of the day/night changes on the SCR parameters for different backgrounds, a small region was selected within the campus, as shown in Figure 5. The distance between sensors and a target is around 300 m. 
The sensing point was selected carefully to include a variety of backgrounds, such as remote mountains, buildings, a near field, and the sky. During the recording, the meteorological measurement data from the Korea Meteorological Administration web site (KMA, http://web.kma.go.kr) were also checked, as shown in Table 2 (winter) and Table 3 (spring). The tables list the recording time, overall weather, visibility, cloud, temperature, and relative humidity. In winter, the overall weather was clear, with a temperature range of −3.0 °C to 4.8 °C and a humidity range of 14% to 66%. In spring, the overall weather was cloudy, with a temperature range of 5.6 °C to 15.4 °C and a humidity range of 27% to 75%. Based on the measured weather, the atmospheric transmittance can be simulated using the MODTRAN (MODerate resolution atmospheric TRANsmission, http://modtran5.com/) program, which is designed to model the atmospheric propagation of electromagnetic radiation over the 0.2 to 100 μm spectral range. The simulation parameters were selected as shown in Figure 6(a). Figure 6(b) shows the corresponding transmittance according to distance. The transmittance was evaluated from 100 m to 1200 m at 100 m intervals because small targets should be detected and tracked at 1200 m. The transmittance at 1200 m was 0.76 and increased to 0.95 at 100 m. Note that the transmittance was quite high, so the signal attenuation due to the atmosphere could be considered negligible. Examples of Acquired Images. The recording area was selected to include sky, mountain, building, and field backgrounds, as shown in Figure 7, where the locations of the target and cameras are indicated. Over a 24-hour period, a pair of LWIR and CCD images was recorded at 1-hour intervals. In LWIR, both the digital 14-bit data and a contrast-enhanced image were acquired for variability analysis. Figures 8 and 9 give partial examples of a 24-hour recording in winter and spring, respectively. The scene temperatures are indicated in the LWIR images, and the recording times are displayed on the CCD images. Variability Analysis of Infrared Images Analysis Factors. The key parameters in small target detection are the pre-filtered input image, the estimated background image, and the standard deviation map. The contrast data and SCR values can be derived from these three key parameters. Figure 10 shows the flow of the SCR computation for a test image. The test image was obtained from 14-bit raw data, and the bright spot represents a gas heater (Figure 10(a)). An SNR-enhanced image can be obtained by applying a matched filter to the input image (Figure 10(b)). The background image can be estimated using a local mean filter with an 11 × 11 moving average kernel (Figure 10(c)). A contrast image can be obtained by subtracting the estimated background image from the pre-filtered input image (Figure 10(d)). A standard deviation map was estimated from the contrast image (Figure 10(e)). The final SCR map was generated using the contrast result and the standard deviation map (Figure 10(e)). The SCR-related parameters should be analyzed for different backgrounds because antitank missiles can appear anywhere. The backgrounds were classified as natural (sky, remote mountain, near field) and artificial (man-made buildings). Figure 11 shows the corresponding regions indicated by the rectangles.
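The SCR-map flow of Figure 10 can be summarized in a short NumPy/SciPy sketch. This is a minimal illustration under stated assumptions: a Gaussian pre-filter stands in for the matched filter, an 11 × 11 moving-average kernel estimates the background, and the local standard deviation is computed from the contrast image; the kernel sizes, the input array, and the function names are placeholders rather than the exact implementation used in the measurements.

```python
# Minimal sketch of the SCR-map computation described around Figure 10.
# Assumptions (not taken from the original code): 3x3-ish Gaussian pre-filter,
# 11x11 moving-average background, local std estimated over an 11x11 window.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def scr_map(img, bg_size=11, eps=1e-6):
    img = img.astype(np.float64)
    pre = gaussian_filter(img, sigma=1.0)            # pre-filter (SNR enhancement)
    background = uniform_filter(pre, size=bg_size)   # 11x11 moving-average background
    contrast = pre - background                      # target-background contrast image
    # Local standard deviation of the contrast image (clutter level)
    local_mean = uniform_filter(contrast, size=bg_size)
    local_sq_mean = uniform_filter(contrast**2, size=bg_size)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean**2, 0.0))
    return contrast / (local_std + eps)              # SCR map

# Example on a synthetic 14-bit-like frame with one bright point source
frame = np.full((128, 128), 1200.0) + np.random.default_rng(1).normal(0, 5, (128, 128))
frame[64, 64] += 300.0                               # hypothetical hot point target
print(float(scr_map(frame)[64, 64]))                 # high SCR at the target pixel
```

With the measured 14-bit frames in place of the synthetic array, the per-region averages of this map correspond to the SCR values analyzed in the following subsections.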
Because this study was interested in the background variation effects on the SCR values, a fixed target signal, such as 7317 obtained by averaging the target intensities, was used. This assumption is reasonable because the target distance is within 1 km and the signal attenuation is negligible. For each pixel position (( , )), the mean background intensity ( ( , )), standard deviation ( ( , )), and SCR value ( CR( , )) were calculated. A representative value for each region was obtained by averaging the corresponding values. Variation Analysis Results. The effects of the recording time, temperature, and humidity on the SCR parameters were analyzed. From the analysis, the optimal small target detection time, temperature, and humidity conditions were obtained for different backgrounds. In addition, the evaluated data was used to control the detection thresholds to achieve a predefined detection rate. In the first inspection, the SCR parameter variations were analyzed according to the recording time. In winter season, the recording time started at 10 a.m. and ended at 9 a.m. on the next day with a 1-hour recording interval. Figure 12(a) shows the average background intensity ( ) variations for the four types of backgrounds over a 24-hour period. The background intensities were relatively high during the day and low during the night. Sky background showed very low intensity and increased when a cloud appeared (09 H). Given a fixed target intensity, the contrast data can be calculated as shown in Figure 12(b). The contrast values showed the lowest value at noon and fluctuated during the night. The sky background showed the highest contrast all the time. During the day (10 H-15 H), the contrast magnitudes were as follows: sky > mountain > building > near field. During the night, the order of the changes was sky > near field > mountain > building. The contrast of the building and target decreased at night because humans use energy to warm rooms. The variations in the clutter level can also be checked using the standard deviation of the background, as shown in Figure 12(c). According to the graph, the clutter level increased during the day and decreased during the night. The standard deviation of the sky background almost showed the lowest values but increases when a cloud appeared (03 H, 08 H). The near field showed strong clutter during the day and evening. The building background showed almost constant clutter during the entire day. The clutter level of the mountain background showed a peak at noon and decreased. The final SCR versus time curve can be obtained from the above parameter variations, as shown in Figure 12(d). The mountain and near field background showed increasing SCR values according to the time and the building background showed an almost constant SCR curve. The sky background represents very high SCR values and decreased abruptly when the cloud appeared (03 H, 08 H). As higher SCR values would ensure a higher detection rate, the best operating time can be predicted for each background from the curve. In spring season, the recording conditions are the same as the winter season except the starting time (we started at 11 a.m.). Figure 13(a) shows the average background intensity ( ) variations for the four types of backgrounds over a 23hour period. The background intensities were relatively high during the day and low during the night except the sky background. It showed very low intensity and increased when a cloud appeared (after 21 h). 
Given a fixed target intensity, the contrast data can be calculated as shown in Figure 13(b), and the variations in the clutter level can be checked using the standard deviation of the background, as shown in Figure 13(c). According to the graph, the clutter level was high during the day and decreased during the night. The order of the clutter level during the night was building > mountain > near field > sky. The final SCR versus time curve can be obtained from the above parameter variations, as shown in Figure 13(d); the SCR of the sky background varied mainly with the level of cloud. Note that the SCR of the building background was almost constant in winter but increased over the 23-hour period in spring. Comparing the winter and spring data reveals interesting results, as shown in Figure 14. Figures 14(a) and 14(b) represent the temperature and humidity variations according to time. The temperature increased during the day and decreased during the night. Conversely, the humidity decreased during the day and increased during the night. The temperature in spring was higher than in winter, and the humidity in spring fluctuated less than in winter. Figures 14(c)-14(f) compare the SCR values between winter and spring for the mountain, building, near field, and sky regions, respectively. The SCR curves of the mountain and near field show similar patterns in the winter and spring seasons. However, the SCR curves of the building region show different characteristics: an almost constant level in winter and an increasing pattern in spring. The SCR curves of the sky region are quite random, depending on the level of cloud. In addition, the SCR parameter variations were analyzed according to the temperature. Because the temperatures were recorded at each recording time, the SCR parameters were reordered according to the temperature. As shown in Figures 14(a) and 14(c)-14(f), the SCR values of the mountain and near field increased slightly as the temperature decreased. Those of the sky background did not reveal such a trend because it is affected more by cloud. The SCR values of the building background were relatively unaffected by temperature in winter, but not in spring. From this inspection, small targets can be detected better when the backgrounds are cold (at night), particularly in natural backgrounds such as mountains and near fields. In the last inspection, the SCR parameter variations were analyzed according to the humidity. Because the relative humidity was recorded at each recording time, the SCR parameters could be reordered according to the humidity level. As shown in Figures 14(b) and 14(c)-14(f), the SCR values of the mountain and near field increased slightly with increasing humidity. Those of the sky background did not reveal such a trend because it is affected more by cloud. The SCR values of the building background were relatively unaffected by the humidity. Table 4 lists the overall evaluations. To cope with these signature variations, two overcoming approaches are considered: knowledge-based adaptive thresholding and a robust detection filtering method. Knowledge-Based Thresholding. SCR values change enormously according to the recording time, temperature, humidity, season, background type, and so on. Given a sufficiently large database, a knowledge-based target detection system can handle the signature variation using (11), where an adaptive threshold (Th) is determined as a function of time (t), temperature (T), humidity (h), season (s), and background type (b): SCR(x, y) > Th(t, T, h, s, b). (11) In practice, however, such an approach is time consuming, and realizing the required knowledge database is impractical. Robust Target Detection Filter.
Background subtraction-based small target detection is sensitive to the IR signature variation and generates many false detections from background clutter. However, if a robust target detection filter is used, the problem can be mitigated. Over the past decades, a variety of approaches have been developed. Among them, the temporal variance filter (TVF) of the temporal profile has been used successfully to detect point targets moving at subpixel velocity [8,9]. Slowly moving cloud clutter can be removed by subtracting the connecting line of the stagnation points (CLSP) [10]. Recently, the CLSP method was approximated for real-time processing [11]. In supersonic missile detection with a high-frame-rate camera, a detection algorithm should be simple yet provide a high detection rate and localization accuracy over a wide range of target velocities (subpixel to pixel velocity per frame). Because the TVF-based method detects targets based on stripe patterns, it shows high detection performance. However, it has limitations such as the ambiguity of the target position and the subpixel velocity assumption, as shown in Figure 15(a). The ambiguity of the target position can be solved by the intersection of the TVF and a spatial filter, but this leads to low detection performance in background clutter, as shown in Figure 15(b). We solve these three problems by hysteresis threshold-based detection after a temporal contrast filter (TCF). The TCF can enhance the signatures of moving target pixels, and the hysteresis threshold-based detection can localize targets accurately. TCF-Based Supersonic Target Detection System. The proposed small target detection system consists of a TCF part and a detection part, as shown in Figure 16. The filtering part enhances the target signature by applying the temporal contrast. If I(x, y, n) denotes the intensity of pixel (x, y) at the current nth frame, the TCF at (x, y, n) is defined as in (12), TCF(x, y, n) = I(x, y, n) − min_{k=1,...,N−1} I(x, y, n − k), (12) where the buffer size is N and the previous N − 1 frames are used to estimate the background intensity. The key part of the TCF is the background signature estimation by the minimum filter, which maximizes the signal-to-noise ratio. Because the contrast is produced by the difference between the current intensity and the previous intensities of a pixel, the ambiguity of the target location is removed. The rest of the detection system consists of a hysteresis thresholding method. The first threshold is selected to be as low as possible in order to find the candidate target regions. Then, an 8-nearest neighbor (8-NN) based clustering method is used to group the detected pixels, so that each candidate region can be divided into a target region and a background region. The adaptive threshold detection is conducted using (13), SCR_temp = (TCF_max − μ_BG) / σ_BG, (13) where TCF_max denotes the maximal TCF in a target region and μ_BG, σ_BG represent the average and standard deviation of the background region, respectively. If the signal-to-clutter ratio (SCR_temp) is larger than a predefined threshold k, the current candidate region is declared a detected target. Experimental Results. We use the TVF as a baseline filtering method [8]. Targets are detected using the same hysteresis thresholding method. In addition, we compare the TCF with the modified TVF (modTVF), which uses both the TVF and a spatial filter (mean subtraction filter) to localize targets. We prepared two synthetic image sequences and two real target sequences (F-15, Metis-M) with a frame rate of 120 Hz. The synthetic sequences were generated using the physics-based method [12].
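Before the test sets are described in detail, the following is a minimal NumPy sketch of the TCF and hysteresis detection stages described above. The per-region background statistics are simplified to global statistics here, and the function and parameter names are assumptions for illustration rather than the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def temporal_contrast_filter(frames):
    """TCF for the newest frame in a buffer of N frames (oldest first).

    The background at each pixel is estimated with a minimum filter over the
    previous N-1 frames; the contrast is the current intensity minus that estimate.
    """
    frames = np.asarray(frames, dtype=float)
    background = frames[:-1].min(axis=0)      # minimum filter over the N-1 past frames
    return frames[-1] - background

def detect_targets(tcf, low_th, k):
    """Hysteresis-style detection: low threshold -> 8-NN clustering -> SCR test."""
    candidate = tcf > low_th                                          # first (low) threshold
    labels, n = ndimage.label(candidate, structure=np.ones((3, 3)))   # 8-connected grouping
    background = tcf[~candidate]                                      # simplification: global background
    mu_bg, sigma_bg = background.mean(), background.std() + 1e-6
    detections = []
    for i in range(1, n + 1):
        region = tcf[labels == i]
        scr_temp = (region.max() - mu_bg) / sigma_bg                  # Eq. (13)-style statistic
        if scr_temp > k:                                              # temporal threshold, e.g., k = 7
            ys, xs = np.nonzero(labels == i)
            detections.append((int(ys.mean()), int(xs.mean())))
    return detections
```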
A test target at Mach 3 is inserted into a real ground clutter image with an incoming path (Set 1) and a passing-by path (Set 2). The first real sequence consists of four F-15s with dynamic motion in strong cloud clutter (Set 3). The second real sequence contains a real antitank missile (Metis-M) incoming near the IR camera (Set 4, Cedip, LWIR, 120 Hz). We evaluated the proposed method in terms of target detection performance as well as filtering performance. The filtering performance can be measured by the improvement of SCR (ISCR), defined as SCR_out/SCR_in. As shown in Figure 18, the proposed method outperforms the others in terms of ISCR for test Set 4. Table 5 summarizes the statistical performance comparisons of the proposed TCF, TVF, and modTVF in terms of detection rate and false alarm rate. We used the same temporal threshold (k = 7) and buffer size (N = 5) for a fair comparison. According to the results, the proposed temporal filter produces a much higher number of correct detections and lower localization errors than the other methods. Figure 19 shows small moving target detection results for cluttered images, where the small rectangles represent detections. As indicated by the arrows, the TVF showed inaccurate target localization due to the stripe patterns, and the modTVF often missed true targets in clutter such as cloud edges and the ground. Note the superior detection performance of the TCF-based method. Conclusions. An analysis of the effects of the environment on IR-based small target detection is very important. In this study, 24-hour IR data were recorded in winter and spring, and the IR variations were analyzed in terms of the small target detection parameters, particularly the signal-to-clutter ratio (SCR), which is the first trial in this area. SCR variations were analyzed with regard to the recording time, temperature, and humidity. According to the analysis, the natural backgrounds, such as mountains and near fields, behave similarly: the SCR values increased over the recording time (10 H-09 H) in these regions, and the SCR values decreased with increasing temperature and humidity. The SCR values of the sky background were quite high and did not show a specific pattern but were strongly affected by cloud. The SCR values of the man-made background, such as buildings, were almost constant regardless of the recording time, temperature, and humidity, except in spring. Overall, the best conditions can be determined for optimal small target detection, or the small target detection performance can be predicted under different weather conditions and backgrounds. For optimal target detection, IR signature variations should be considered to obtain a desirable target detection rate and false alarm rate. If the background-related SCR variations are used, the small target detection system can be upgraded by controlling the detection thresholds adaptively depending on the background and weather conditions. However, such an approach is impractical because it requires a huge number of IR databases covering the environmental parameters. Alternatively, the IR variation can be overcome by a robust detection method. This paper proposed a new, simple but powerful supersonic small target detection method based on the novel temporal contrast filter. As validated by a set of experiments, it can effectively find and localize true targets with velocities from subpixel to pixel per frame in various clutter images, including cloud and ground clutter.
Due to the simplicity of the algorithm and its powerful detection capability, the proposed method can be used in real-time military applications with staring infrared cameras.
Visual Data Simulation for Deep Learning in Robot Manipulation Tasks , Introduction Robotic manipulators are used in an industrial automation for decades. Typical use-cases vary from welding to pick-and-place tasks. Nowadays, the cooperative robots share the workspace with humans and therefore traditional approaches, relying on precise predefined positions of items in robots workspace, are not working anymore. The robot needs to sense its working space with different sensors and adapt its actions according to the actual situation. With recent progress in deep learning, there start attempts to solve situations, where the robot needs to grasp an object in a random position with end-to-end neural networks trained from large training datasets. The deep convolutional neural networks (CNNs), especially when working with images, need a huge amount of labeled data to train. Getting data with proper labels from the real world is usually time-consuming, and often a manual task. For example, this end-to-end network approach [10] makes use of RGB-D sensor and more than 50 thousands of grasping trials and needs 700 hours of robot labor. Therefore, there is a need to speed up and automation of data collecting and labeling. One possible way is to use simulated images for the training of the CNN. However, training from synthetic images can lead to overfitting of the network to unrealistic details only present in synthetic images and failing to generalize well on the real images. The use of a simulator as realistic as possible is a way presented in this paper. Related Works Grasping movement is typically planned directly from RGB or RGB-D image of target objects. Analytic approaches register actual data to the database of 3D models of known objects with precomputed grasping points [14,1,6]. A registration often involves many intermittent steps like image segmentation, classification and pose estimation, where each step typically depends on multiple parameters, that are difficult to tune. Very good results with utilizing simulated data of 3D point clouds achieves the approach described in [7] for defining grasping points. It achieves better results than analytic approaches. [8] introduces the extension of previous for suction cups grasping. Alternative approaches are making use of deep learning to estimate the 3D pose of the object directly from intensity image and/or 3D point cloud [5,15]. As there is a need for a large number of training data, a new approach is to train the network on simulated images [13] and to adapt the representation to real data [12]. The work [3] improves the precision of recognition by adding synthetic noise to synthetic training images. Recent research suggests that in some cases it may be sufficient to train on datasets generated using perturbations to the parameters of the simulator [4]. Problem Definition The problem solved in this paper is motivated by the real-world problem of picking of specific metallic parts of a single type from a transportation package and feed these to an automated industrial assembly line. As this task is highly repetitive and the motion performed by the human worker is tedious and onesided, there is a request for automation. Parts are not fully randomly distributed in the package, as they are originally organized in columns, but get scattered during the transport. It is expected by the end-user, that manipulator can pick more than 80 percent of the object from the package. The assembly line needs one part every 60 seconds. 
Another request is flexibility of the solution, as there are many different types of parts manually feed to automated assembly lines. Therefore, modification of the solution for the new part should be as easy as possible. As the existing solution described in the following section is based on convolutional neural network, it needs a huge amount of training data. Therefore, this paper focuses on the generation of simulated training data and evaluation of the usage of this data in the described solution. Solution Pose estimation of the objects for picking is described in details in diploma thesis [11]. Pose estimation of the object position is divided into three steps: segmentation of the image and detection of regions that contain a single object, raw estimation of the position and accuracy improvement. Segmentation The segmentation of the object is base on the Histogram of Oriented Gradients (HOG) approach [2] with the sliding window. This segmentation method was used because is easy to train and performs well under different conditions, e.g. change of light. The parameters of the HOG detector are: block size 16x16, cell size 8x8, image patch size 64x64. A simple SVM classifier is used for classification if the window contains an object or not. Image patches detected as containing object by HOG are used in later steps of the algorithm. Raw Estimation of Pose The size and the position of the center of the patch segmented in the previous step by HOG are used as a first estimation of the distance and position of the object respectively. This first estimation is not accurate enough for reliable picking the object from the box. Therefore next step is necessary to improve the estimation of the position to the level, where the gripper can reliably pick the object. Moreover, the orientation (normal vector) of the object is necessary to estimate to allow successful picking of the object. Accuracy Improvement using CNN For the further improvement of the object position accuracy, the deep convolution network (CNN) is used. For the needs of the CNN, the previously detected patches are resized to the unified size of 64x64 pixels. As the image patches are resized to unified size, it is not possible for the CNN to directly estimate the position and distance of the object and only multiplicative coefficient of the position in x-y plane and distance from the previous step are trained. The input of the network is an image patch of size 64x64 pixels. It is followed by four convolutional layers with ReLU activation function followed by max pool layers. Usage of max pool layers effectively decreases the number of parameters of the model, because of the sparsity of data. The last layer of the network is a fully-connected layer with 3 neurons, whose output are predicted position coefficients. The network is learned by the back-propagation approach. Gathering of Training Dataset The gathering of training data is a semi-autonomous process. At first, the precise position of the learned part is determined by manually placing the gripper on the part. Then, the gripper with the camera is automatically placed into predefined positions in different distances and angles. As the position of the part in the transportation package, e.g. at the bottom, on the top or near the package wall, influence the appearance of the part, this procedure is repeated with part placed in the different configuration in the transportation package. For each configuration are gathered hundreds of images. 
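As a rough illustration of the patch-refinement network described in the "Accuracy Improvement using CNN" subsection above, the following is a minimal PyTorch-style sketch. The channel widths and kernel sizes are assumptions; the text only fixes the 64x64 gray-scale input, the four conv/ReLU/max-pool stages, the 3-output fully connected layer, and the MSE/Adam training setup.

```python
import torch
import torch.nn as nn

class PoseRefineCNN(nn.Module):
    """64x64 gray-scale patch -> 3 multiplicative coefficients (x, y, depth)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 8 -> 4
        )
        self.head = nn.Linear(64 * 4 * 4, 3)  # predicted position coefficients

    def forward(self, patch):                 # patch: (N, 1, 64, 64)
        x = self.features(patch)
        return self.head(torch.flatten(x, 1))

# Training setup as stated in the text: MSE loss with the Adam optimizer (lr = 0.001)
model = PoseRefineCNN()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```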
These images were then processed with the HOG detector, and only image patches that contain the part were used in the later steps. The relative position between the camera and the part was also calculated in this step, from the original position of the gripper placed on the part and the actual position of the gripper with the camera. The training data consist of: (1) the ground-truth relative position between the camera and the object and (2) the gray-scale image patch. Synthetic Training Dataset To be able to train the CNN from synthetic training data, we need to obtain the same data in the same format. The most crucial part is the gray-scale image. As the technical drawing of the part is available, it was easy to get the 3D model of the part in question. Now, a realistic gray-scale image of the model with proper lighting, shading, and reflection needs to be simulated. The most promising approach seems to be the use of ray-tracing software, which can realistically simulate all the complicated reflections and lighting of 3D models with different materials and textures. Our choice is the Persistence of Vision Raytracer (PoV-Ray) [9] (see Figure 2 for an example of the result), as it is open-source and the authors are familiar with its usage. The real camera is placed in the center of the gripper head with a circular light around the camera. See Figure 3, depicting the gripper head with the camera, suction cups, and circular light. Therefore, it was necessary to simulate a camera with the same field of view and the same light source around the camera, to get the same reflections on the surface of the parts. The object and the camera were placed in the same positions as gathered by the real manipulator with the real part, so the synthetic dataset is as close to the real one as possible. The next task was to find a material and texture for the model that appears as close to the real part as possible. The similarity was evaluated by eye and improved iteratively to achieve the results depicted in Figure 4. Experiments Description and Evaluation In the experiments, we compared the errors of the estimated position of the parts. We created two training datasets of the same size of 1000 images. The first dataset was collected with the real camera placed on the real manipulator. The second dataset was generated in the PoV-Ray software. Both datasets contain the same items, with images taken from the same positions under the same lighting. We also created a testing dataset with 200 images. The testing dataset was collected with the real camera on the real manipulator. Two networks were trained in a supervised fashion using the mean squared error loss. The Adam optimizer with a learning rate of 0.001 was used to find the optimal weights. The training required 5,000,000 iterations, and a dropout rate of 0.5 was used. The first network was trained on the real dataset and the second network was trained on the synthetic dataset. To obtain the reference performance, the first network, trained on the real training dataset, was run on the real testing dataset (see Figure 5). The achieved errors were used as a reference point for the comparison. The performance of the second network run on the real testing dataset (see Figure 6) was then compared with the first one. The results of the network trained on the synthetic dataset are slightly worse than those of the original network trained on the real images.
The difference between the two networks is less than 10%, which is within the tolerance for deployment in the real process. The precision of the position determination is on average 7% worse, and the precision of the depth determination is on average 3.5% worse. The variances of the position and depth errors are not significantly worse. The time needed to collect the real dataset of 1000 images is around 1.5 hours, whereas a synthetic dataset of the same size can be generated on the MetaCentrum Grid Infrastructure on the order of minutes. Conclusion and Further Work The performance of the network trained on the synthetic dataset is slightly worse than that of the network trained on the real dataset, but the difference is within tolerance, so the network trained on the synthetic dataset can be deployed with the real manipulator. The part used so far is quite simple and rotationally symmetrical, so a fairly small training dataset suffices. As we plan to use this system for more complex parts, a much bigger dataset will be needed, and the time savings will then be more significant. For further improvement, we plan to combine the real and synthetic data to improve the performance of the network. We also plan to replace the manual tuning of the material parameters in the ray-tracing software with an automated process that learns the parameters from the performance of the network. As the material parameters significantly influence the light reflections, it is expected that, with a better estimation of the material parameters, the simulated images will be more realistic.
A Therapeutic Sheep in Metastatic Wolf's Clothing: Trojan Horse Approach for Cancer Brain Metastases Treatment DOX-PLGA@CM employs the whole set of membrane molecules of a brain-homing metastatic breast cancer cell optimized through a natural selection process. Thus, the hetero- and multivalent effects of these molecules greatly facilitate the nanoparticle crossing the blood-brain barrier. Attributed to the homotypic effect of the nanocarrier, DOX-PLGA@CM shows stronger anticancer efficacy than free DOX for its parental cells. DOX-PLGA@CM effectively reaches the metastatic tumor lesions in the brain and slows down the progression of brain metastatic breast cancer. Early-stage brain metastasis of breast cancer (BMBC), due to the existence of an intact blood-brain barrier (BBB), is one of the deadliest neurologic complications. To improve the efficacy of chemotherapy for BMBC, a Trojan horse strategy-based nanocarrier has been developed by integrating the cell membrane of a brain-homing cancer cell and a polymeric drug depot. With the camouflage of an MDA-MB-231/Br cell membrane, the doxorubicin-loaded poly(D, L-lactic-co-glycolic acid) nanoparticle (DOX-PLGA@CM) shows enhanced cellular uptake and boosted killing potency for MDA-MB-231/Br cells. Furthermore, DOX-PLGA@CM is equipped with naturally selected molecules for BBB penetration, as evidenced by its boosted capacity for entering the brain in both healthy and early-stage BMBC mouse models. Consequently, DOX-PLGA@CM effectively reaches the metastatic tumor lesions in the brain, slows down cancer progression, reduces tumor burden, and extends the survival time of the BMBC animals. Furthermore, the simplicity and easy scale-up of the design open a new window for the treatment of BMBC and other brain metastatic cancers. Introduction Brain metastases of breast cancer (BMBC) are one of the most frequent and deadliest neurologic complications [1]. More than one-third of Her2-positive or "triple-negative" (estrogen receptor-negative, progesterone receptor-negative, and Her2-negative) breast cancer patients will progress to brain metastasis, which has a poor prognosis with a median survival time of fewer than 12 months [2,3]. Typically, unrestrained brain metastasis presents the feature of aggressive infiltration, leading to the destruction and displacement of brain tissue and subsequent cognitive impairment [4]. Multifocal lesion distribution throughout the whole brain is another significant feature of brain metastasis at the time of diagnosis, as evidenced by the localization of lesions in the cerebral hemispheres, cerebellum, and brainstem at rates of 80%, 15%, and 5%, respectively [5]. Currently, BMBC treatment mainly relies on surgery and radiation, including stereotactic radiosurgery (SRS) and whole-brain radiation therapy (WBRT) [6]. Surgery is limited to some well-defined and non-invasive metastasis lesions in favorable locations.
Radiation therapy, especially WBRT, is challenged by significant side effects such as cognitive impairment [7,8], while the improved overall survival time is limited and heterogeneous [9]. However, chemotherapy, a widely adopted therapeutic approach for many cancers, is excluded from the standard care for BMBC due to its inability of transporting to brain metastases to reach adequate therapeutic concertation in the presence of blood-brain barrier (BBB) and/or blood-tumor barrier (BTB) [1]. At the early stage, BMBC displays a co-opted proliferation and growth pattern along the BBB basement membrane, and the BBB integrity is relatively well maintained. Along with cancer progression, neovascularization would sprout out from existing metastatic lesions with the physiologically compromised structure of BBB, termed as BTB [10]. Although the emergence of BTB at the advanced stage of BMBC allows for some extravasation of larger molecules, including nanoparticles, it is still not sufficient for the accumulation of drugs and nanoparticles to a therapeutically effective concentration [11,12]. Moreover, the permeability of BTB is of significant heterogeneity in different metastatic lesions, even within a lesion [12]. Unfortunately, the progression from BBB to BTB indicates the further deterioration and poor prognosis of BMBC. Therefore, there is an urgent need to develop a reliable and practical approach to treat BMBC at its early stage when the BTB has not yet emerged. To help drugs cross the BBB, many nanoparticle-based delivery systems have been developed due to their improved circulation time in the blood and BBB-oriented functionalization potential. For instance, by conjugating ligands for the receptors, such as lactoferrin [13] and transferrin [14], express on the surface of brain endothelial cells on the nanoparticles, an improved cargoes delivery to the brain could be realized by receptor-mediated transcytosis [15]. Nonetheless, due to the competitive binding with the abundant endogenous ligands [16], reduced targeting property induced by the rapid formation of plasma protein corona on the nanoparticle surface in circulation system [17], and difficulty in bottom-up targeting ligand conjugation [18], the accumulation of nanoparticles in the brain via receptor-mediated transcytosis is still too low to elicit an effective therapeutic responsive [19]. Recently, emerging biomimetic nanotechnology realized through cell membrane camouflage has attracted tremendous attention [18,20]. In this notion, a synthetic nanoparticle is cloaked with a natural cell membrane to yield a core/shell structure and bestow the nanoparticle with naturally evolved properties of the source cells, such as immune escaping capacity and prolonged blood circulation time [18]. Encouraged by these advantages, researchers have adopted this strategy for the treatment of brain-related diseases, including glioblastoma multiforme [21,22], ischemic stock [23,24], and Parkinson's disease [25]. Still, very few explored that in cancer brain metastases. During the cascade of brain metastases formation [26], disseminated cancer cells from the primary tumor site arrive in brain vasculature as the "seeds." Subsequently, they traverse across the BBB into the brain parenchyma. It is believed that the interactions between the membrane molecules and receptors of the cancer cells and the endothelial cell are critical for the attachment of cancer cells to brain endothelial cells and subsequent trans-BBB migration [1,26]. 
Since the above cell-cell interaction is multivalent and involves many substances [26], the BBB penetrating efficacy of brain metastatic cells is superior to most developed brain targeted systems. Inspired by this process, we developed a Trojan horse strategy by integrating a polymeric nanoparticle and the cell membrane from a brain homing breast cancer cell, which was generated after two rounds of intracardiac injection and resection from the brain. During this process, the expressed membrane molecules of the cancer cells have been optimized in vivo, which would endow the nanocarrier with the capability of crossing the BBB, homing to brain metastasis lesion, and realizing effective chemotherapy for BMBC (Scheme 1). The biomimetic nanocarrier has a core-shell nanostructure, where doxorubicin (DOX) loaded polymeric nanoparticle prepared from poly (D, L-lactic-co-glycolic acid) (PLGA) constitutes the core, and cell membrane (CM) derived from brain homing MDA-MB-231 breast cancer cell (MDA-MB-231/Br) conceals the core to yield a DOX-PLGA@ CM. To best recapitulate the condition of the early-stage of BMBC, a brain metastasis cancer model constructed through systemic inoculation (intracardiac injection) of MDA-MB-231/Br cells, which maintains the integrity of BBB contrasting to the widely adopted local intracranial injection [27], was employed for the biodistribution and in vivo therapeutic efficacy study. Fabrication of DOX-PLGA The emulsion solvent evaporation method was used to prepare DOX-loaded PLGA nanoparticles (DOX-PLGA). In brief, 100 mg of PLGA and 5 mg of DOX was dissolved in 2 mL of chloroform, followed by the addition of 50 µL of triethylamine and sonicating for 1 min using a sonicator (ULTRASONIC PROCESSOR XL, Misonix, NY, USA) at 70% pulse duty cycle on ice. The solution was dropwise added into 10 mL of 5% PVA solution under slight vortex and further sonicated under the same output and frequency for 15 min on ice. The emulsion solution was slowly poured into 20 mL of 0.5% PVA solution under stirring and continued to stir for 12 h under atmospheric pressure at room temperature overnight until the total evaporation of the organic solvent. The DOX-PLGA were harvested by centrifugation (13,500 g for 15 min), washed twice with PBS solution, and stored at 4 °C for further use. Cell Culture Human breast cancer cells MDA-MB-231 were purchased from ATCC, and its subtype of MDA-MB-231/ Br cells, a brain-homing derivative of a human breast adenocarcinoma line MDA-MB-231, were purchased from Memorial Sloan Kettering Cancer Center. The cells were cultured in Dulbecco's Modified Eagle's Medium (DMEM) containing 10% of fetal bovine serum (FBS, Gibco), 100 U mL −1 of penicillin and 100 mg mL −1 of streptomycin under a humidified atmosphere of 5% CO 2 at 37 °C. The culture medium was replaced with a fresh one every two days. Fabrication of DOX-PLGA@CM Firstly, the cell membrane vesicles (CM) of MDA-MB-231/Br were prepared as our previous method [28]. MDA-MB-231/Br cells were a kind gift from Dr. Joan Massagué at the Memorial Sloan Kettering Cancer Center. Luciferases stably expressing cells were established by lentivirus transfection of Luc vector. Briefly, MDA-MB-231/Br cells were harvested with 2 mM EDTA PBS solution and resuspended in hypotonic lysis buffer (20 mM Tris-HCl pH7.4, 10 mM KCl, 2 mM MgCl 2 , and 1 mM EDTA-free mini protease inhibitor tablet per 10 ml), followed by homogenization with a Dounce homogenizer for 20 times. 
The homogenate was centrifuged at 10,000 g for 10 min, and the supernatant was collected and further ultracentrifuged at 100,000 g for 1 h. The pellet was collected and washed once with 10 mM Tris-HCl (pH = 7.5). Finally, the cell ghosts were re-suspended in water, sonicated for 30 s in a water bath sonicator, and then physically extruded through a 400 nm polycarbonate membrane for 5 cycles to obtain CM vesicles. The protein concentration of the CM vesicles was quantified by BCA protein assay. The CM vesicles were stored at 4 °C until further use. DOX-PLGA and CM vesicles were thoroughly mixed at a 1:1 weight ratio of PLGA to protein and further extruded through a 200 nm polycarbonate membrane for 7 cycles to fabricate CM-coated DOX-PLGA (DOX-PLGA@CM). Characterization of the Nanoparticles The morphologies of DOX-PLGA and DOX-PLGA@CM were characterized by transmission electron microscopy (Hitachi HT7800 TEM, Hitachi High Technologies, Tokyo, Japan), and their hydrodynamic sizes and zeta potentials were measured with a Nano ZS Zetasizer (Malvern Instruments, UK). The DOX loading content (LC) and loading efficiency (LE) were measured by UV-Vis spectroscopy at a wavelength of 480 nm or by fluorescence spectroscopy at excitation/emission wavelengths of 480/570 nm, with free DOX as a standard, after liberating DOX from the nanoparticles with DMSO, according to the following equations: LC (%) = (amount of loaded DOX)/(amount of loaded DOX + amount of PLGA) × 100 and LE (%) = (amount of loaded DOX)/(amount of total DOX) × 100. The cell membrane proteins coated on the surface of the nanoparticles were confirmed and characterized by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) with Coomassie brilliant blue (Invitrogen, Oregon, USA) staining. To characterize the drug release profiles of DOX, DOX-PLGA@CM dispersed in pH 7.4 or pH 5.0 PBS was placed in a dialysis bag with 8000 MWCO and immersed in the corresponding PBS buffer. At given time intervals, 5 mL of dialysate was collected to quantify DOX release, and the same volume of fresh buffer was replenished. Cellular Uptake of the Nanoparticles in Cancer Cells Cellular uptake was analyzed by flow cytometry. The signals of three samples for each treatment were collected until reaching 20,000 events. Forward and side scatter were "gated" to exclude dirt and clumped cells. Identical laser settings and gating were used for the analyses of all samples. In Vitro Cytotoxicity Assay The cytotoxicity of the different nanoparticles to NIH3T3, MDA-MB-231, and MDA-MB-231/Br cells was investigated by MTT assay. In brief, cells were seeded in 96-well plates at a density of 8,000 cells per well and cultured for 24 h. Thereafter, the medium was replaced with fresh medium containing the different drug formulations at a series of concentrations and incubated for another 48 h. After that, 10 μL of MTT solution (5 mg mL−1 in PBS) was added to each well and incubated for another 4 h. Then, the medium was discarded and replaced with 100 µL of DMSO. The optical density (OD) of each well was measured at 570 nm, and untreated cells were used as controls. The cell viability was calculated according to the following equation: cell viability (%) = OD_A/OD_B × 100, where OD_A is the OD value of the experimental group cells and OD_B is the OD value of the control cells. Cellular Uptake of Nanoparticles in hCMEC/D3 Cells Nile red was loaded into PLGA nanoparticles (Nile-PLGA) to track the intracellular distribution of the nanoparticles. Nile-PLGA was fabricated in the same way as DOX-PLGA, except that DOX was substituted with Nile red.
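For clarity, the loading and viability formulas above can be expressed as a minimal Python sketch. The numbers in the example calls are placeholders chosen only to be consistent with the 100 mg PLGA / 5 mg DOX feed described above; they are not measured data.

```python
def loading_content(loaded_dox_mg, plga_mg):
    """LC (%) = loaded DOX / (loaded DOX + PLGA) x 100."""
    return 100.0 * loaded_dox_mg / (loaded_dox_mg + plga_mg)

def loading_efficiency(loaded_dox_mg, total_dox_mg):
    """LE (%) = loaded DOX / total DOX fed x 100."""
    return 100.0 * loaded_dox_mg / total_dox_mg

def cell_viability(od_treated, od_control):
    """Viability (%) = OD_A / OD_B x 100 (MTT assay, read at 570 nm)."""
    return 100.0 * od_treated / od_control

# Placeholder example: feeding 5 mg DOX with 100 mg PLGA and recovering ~1.3 mg in the particles
print(loading_content(1.3, 100.0))    # ~1.3 %
print(loading_efficiency(1.3, 5.0))   # ~26 %
```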
hCMEC/D3 were cultured in 35 mm glass-bottom dishes at a density 2 × 10 4 cells/well. After 24 h of culture, the cells were treated with Nile-PLGA and Nile-PLGA@CM at a Nile red concentration of 0.1 µg mL −1 and incubated for 2, 4, and 6 h. Then, the cells were washed three times with cold PBS, and fixed with 4% paraformaldehyde for 10 min. Hoechst 33,254 was used to stain the nuclei of the cells. The uptake of nanoparticles in hCMEC/ D3 was characterized by the Carl Zeiss LSM700 confocal microscope. In Vitro BBB Penetration Assay The BBB model was constructed following a reported method [29]. In brief, 20,000 hCMEC/D3 cells were seed on a polycarbonate 24-well Transwell membrane with 8 μm mean pore size to form a cell monolayer. The transendothelial electrical resistance (TEER) of the hCMEC/D3 cell monolayer was measured every day by an epithelial voltohmmeter (Millicell-RES, Millipore, USA). Until the TEER was above 200Ω cm 2 , the established BBB model was used to estimate the penetrating ability and efficiency of the nanoparticles. Culture media (200 µL) containing Nile-PLGA or Nile-PLGA@CM were added into the upper chamber. The low chamber was filled with 600 µL of the plain medium. At given time intervals, the medium in the lower chamber was collected to quantify the penetrating efficiency of nanoparticles by fluorometry and replaced with a fresh medium. The penetrating efficiency of the Transwell without cell monolayer was used as a positive control [30]. Breast Cancer Brain Metastases Model All animal experiments were carried out following the protocol approved by the Institutional Animal Care and Use Committee (IACUC) of the University of South Carolina. Female BALB/c nude mice (5-6 weeks old) and C57 BL/6 J (5-6 weeks old) were purchased from Jackson laboratory. For tracking the tumor growth, MDA-MB-231/Br cells were transduced with pLentipuro3/TO/V5-GW/EGFP-Firefly Luciferase with the help of a lentivirus to yield a luciferase stably expressing cell line (MDA-MB-231/Br-Luc). Breast cancer brain metastases model was established following the method described in the literature [30,31]. In brief, the BALB/c nude mice were anesthetized by 2% isoflurane, and 200,000 MDA-MB-231/Br-Luc cells in 100 µL DMEM medium were intracardiac injected into the left ventricle with a 26 G hypodermic needle. The behaviors of mice were observed every other day. The tumor growth was monitored by bioluminescence imaging using IVIS Lumina III whole-body imaging system (PerkinElmer Inc., Waltham, USA) twice per week. In Vivo Distribution of Nanoparticles For tracking nanoparticles in vivo distribution in mice, DIR was loaded in nanoparticles as a fluorescence probe. DIR-PLGA and DIR-PLGA@CM were injected intravenously to normal mice (C57 BL/6 J) or breast cancer brain metastatic BALB/c mice at a DIR dose of 0.5 mg kg −1 . Three hours post-injection, the mice were anesthetized and imaged using an IVIS Lumina III imaging system (excitation: 750 nm; emission: 770-790 nm). After that, the mice were sacrificed, and the major organs, including brain, heart, liver, spleen, lung, and kidney were collected for ex vivo imaging to investigate nanoparticles tissue distribution. Blood Clearance Kinetics C57 BL/6 J mice were divided into three groups and intravenously injected via tail vein with free DOX, DOX-PLGA, and DOX-PLGA@CM at a DOX equivalent dose of 2.5 mg kg −1 . At predesigned time points (0.5, 1, 2, 4, 10, and 24 h), blood samples were collected from the orbital vein of the mice (n = 3). 
The amount of DOX in the blood was quantitatively determined with a fluorospectrometer following our previously reported method [32]. Antitumor Therapy in the Breast Cancer Brain Metastases Model Once an obvious brain metastatic tumor signal was observed with the in vivo imaging system, around three weeks after the inoculation of the cancer cells, the mice were randomly divided into 4 groups (saline, free DOX, DOX-PLGA, and DOX-PLGA@CM) and received intravenous administration of the corresponding treatments at a DOX-equivalent dosage of 2.5 mg kg−1 twice per week. The progression of brain metastases was monitored by bioluminescence imaging twice per week. Histological Analysis In a separate study, on day 15 (2 days after the mice received the last treatment), three mice from each group were sacrificed; the brains were isolated for H&E staining to evaluate the antitumor effect, and the other major organs, including heart, liver, spleen, lung, and kidney, were collected for H&E histological assay for toxicity evaluation. Statistical Analysis All data are displayed as mean ± standard deviation (SD) (n ≥ 3), and statistical significance was analyzed with GraphPad Prism 7.0 (GraphPad Prism Software Inc., San Diego, California) using Student's t test or ANOVA with Tukey's post hoc test, except for the Mantel-Cox test used for survival analyses. Differences were considered significant when the p value was less than 0.05. Nanoparticle Characterization The DOX-loaded PLGA nanoparticles (termed DOX-PLGA) were fabricated according to our previously reported emulsification method [32]. During the preparation, a moderate amount of triethylamine was added to the solvent to increase the hydrophobicity of DOX, which promotes a higher loading efficiency and reduces drug leakage. To extend the half-life of DOX-PLGA in the circulatory system by minimizing phagocytic clearance by the reticuloendothelial system (RES), to facilitate traversing of the BBB, and to realize homotypic targeting to BMBC [33], the cell membrane (CM) from MDA-MB-231/Br cells with a high brain-metastatic propensity was employed to camouflage the surface of the nanoparticle by a co-extrusion method, yielding a CM-cloaked DOX-PLGA (DOX-PLGA@CM). The hydrodynamic size of DOX-PLGA@CM was 155.6 ± 8.6 nm (Fig. 1b), which was a little larger than that of its parental DOX-PLGA nanoparticle (146.1 ± 7.9 nm) (Fig. 1a), mainly due to the coating of the CM. Along with the coating, the surface charge of the nanoparticles decreased from −17.0 mV (DOX-PLGA) to −22.1 mV (DOX-PLGA@CM), which was close to that of the CM-formed vesicles (−24.5 mV, Fig. 1c). The morphology of the nanoparticles was observed by transmission electron microscopy (TEM). DOX-PLGA exhibited a spherical structure with a bare and smooth surface (Fig. 1a). In contrast, DOX-PLGA@CM showed an apparent core-shell structure, in which a shell with a thickness of 8 nm was attached to the PLGA core (Fig. 1b), evidencing the successful fusion of DOX-PLGA with the CM vesicles of the BMBC cells. The final LC and LE were 1.24% and 26.04%, respectively. Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) followed by Coomassie brilliant blue staining confirmed that DOX-PLGA@CM has the same set of protein bands as pure CM, indicating the retention of the cell membrane proteins after their assembly onto the nanoparticle (Fig. 1d). All the above results validated the successful formation of nanoparticles camouflaged with the cell membrane of the metastatic breast cancer cells.
In contrast to the previously reported DOX@PLGA nanoparticle, which exhibited a biphasic drug release profile [34], DOX release from DOX-PLGA@CM was free of a burst release phase (Fig. 1e). This feature may attribute to the diffusion barrier formed after the coating of the cell membrane. Another possibility is that surface adsorbed DOX and some superficially loaded DOX and may have released from nanoparticles during the coating procedure due to the free diffusion of DOX molecule and mechanical extrusion through the polycarbonate membrane. In addition, it was revealed that DOX release at pH 5.0 (represented lysosomal pH value) was faster than that at pH 7.4 (represented extracellular pH value), suggesting DOX protonation and the rupture of the membrane increased its solubility at the acidic environment. Meanwhile, there was no apparent aggregation, and little change in DLS size of DOX-PLGA@CM over 24 h in PBS supplemented with 10% FBS at 37 °C (Fig. 1f), or within 1 week in PBS at 4 °C (Fig. 1g), indicating its outstanding colloidal stability, which may attribute to a strong repulsion force between the highly negatively charged (−22.1 mV) particles. Cellular Uptake of the DOX-PLGA@CM Nanoparticle To investigate the homotypic targeting effect of DOX-PLGA@CM, its cellular uptake by MDA-MB-231/Br cells and their parental ones was evaluated with fluorescence microscopy and flow cytometry (FCM). The fluorescence intensity in MDA-MB-231/Br cells treated with DOX-PLGA@CM was significantly higher than that in DOX-PLGA-and free DOX-treated cells (Fig. 2a), and this difference was further confirmed by FCM (Fig. S1). The enhanced internalization effect of DOX-PLGA@CM nanoparticles is attributed to the homotypic adhesive interactions between the membrane proteins of BMBC and its cell source [35]. In contrast, the cellular uptake of DOX-PLGA@CM by parental MDA-MB-231 cells was only slightly enhanced ( Figs. 2b and S2), suggesting that the homogenous homing ability of cell membrane camouflage was highly specific due to the differentiated protein expression on cell membrane (Fig. S3), including their abundance and composition. It is noteworthy noting that, besides high retention of DOX in the cell nucleus in nanoparticles treated cells, there was still a significant amount of DOX distributed in the cytoplasm, which could serve as drug depots to continuously supply DOX to the nucleus. Consequently, the DOX concentration in the nucleus could be maintained at a higher level than the cells treated with free DOX. To verify whether the boosted cellular uptake of DOX-PLGA@CM by MDA-MB-231/Br cells can be translated into an enhanced cell killing effect, we evaluated the cytotoxicity of DOX-PLGA@CM and DOX-PLGA by MTT assay. Figure 2c-d reveals that DOX in all formulations exhibited dose-dependent toxicities to MDA-MB-231 and MDA-MB-231/Br cells. Free DOX and DOX-PLGA exhibited similar cytotoxicity to both cells within the low concentration range (0.005-0.05 µM), while discrepant cytotoxicities were observed at a high dose (0.1 µM). As expected, DOX-PLGA@CM exhibited significantly higher toxicity than DOX-PLGA for both cells in the high dose range. Strikingly, DOX-PLGA@CM killed more MDA-MB-231/ Br cells than free DOX in the concentration range from 0.1 to 5 µM, while this phenomenon was not observed in the parental cells, which may attribute to the homotypic targeting effect of CM (Fig. 2a). 
These discrepant cytotoxicities are also visually reflected in the IC50 values. The IC50 of DOX-PLGA@CM for MDA-MB-231/Br cells was 0.289 µM, which was significantly lower than that of DOX-PLGA (0.865 µM). However, there was nearly no difference in the IC50 between DOX-PLGA@CM (0.805 µM) and DOX-PLGA (0.809 µM) for MDA-MB-231 cells. To further probe the potential side effects of DOX-PLGA@CM on normal cells, the cytotoxicity of DOX-PLGA@CM for NIH3T3 cells was investigated. Figure S4 reveals that DOX-PLGA@CM is much less potent in killing NIH3T3 cells than free DOX, suggesting an improved therapeutic window for DOX-PLGA@CM in treating BMBC, which was due to the low activity of DOX-PLGA@CM in entering NIH3T3 cells (Fig. S5). In Vitro BBB Penetrating Effect of the DOX-PLGA@CM Nanoparticle To investigate whether the cloak of brain metastatic CM on the nanoparticle surface could facilitate nanoparticle transport through the BBB in vitro, we examined the cellular uptake of Nile red-loaded PLGA and PLGA@CM by human brain microvascular endothelial cells (hCMEC/D3), the main cellular component of the BBB, by confocal microscopy analysis. As shown in Fig. 3a, the fluorescence signal in hCMEC/D3 increased with co-incubation time. Moreover, a stronger fluorescence signal was observed in hCMEC/D3 treated with Nile-PLGA@CM than in those treated with plain Nile-PLGA nanoparticles at all time points. These results were highly consistent with a previous literature report [21] and indicated that the camouflage of the brain metastatic cell membrane significantly promoted the internalization of the nanoparticles by hCMEC/D3, which is a critical prerequisite for penetrating through the BBB. Following the cellular uptake study, the in vitro BBB penetrating efficiency of the nanoparticles was investigated in an in vitro BBB model, as shown in Fig. 3b, where hCMEC/D3 cells were seeded and cultured on a Transwell insert to form an intact monolayer mimicking the brain microvascular endothelial cell layer. In this model, the value of the transendothelial electrical resistance (TEER) is an effective indicator for monitoring the formation of the endothelial monolayer. When the value of TEER was larger than 200 Ω cm2, it meant that the integrity and permeability of the cell monolayer had reached a level similar to that of the in vivo BBB, and the model could therefore be used for the BBB penetration assay [36]. Nile-PLGA and Nile-PLGA@CM nanoparticles were added to the upper chamber, respectively. After incubation for different time intervals, the penetrated nanoparticles in the lower chamber were collected and quantified with a fluorescence spectrometer. As shown in Fig. 3c, the CM-coated nanoparticles penetrated the monolayer more efficiently than the plain nanoparticles. Pharmacokinetic Properties of the DOX-PLGA@CM Nanoparticle Theoretical studies and research practice have extensively proved that cancer cell membrane-camouflaged nanoparticles inherit most of the essential membrane features of their original cells, such as excellent immune escape and prolonged blood circulation through avoiding phagocytosis and clearance by the reticuloendothelial system (RES), owing to the retained membrane proteins (Fig. 1d), for instance, the "do not eat me" CD47 signal [37]. Therefore, we further studied the pharmacokinetic profiles of DOX-PLGA@CM, DOX-PLGA, and free DOX to verify whether the CM coating can prolong the blood circulation time of DOX-PLGA. As shown in Fig. 4a, free DOX was quickly eliminated from the body, as evidenced by a blood elimination half-life (T1/2) of 3.37 h.
The T1/2 of DOX-PLGA was moderately prolonged to 5.31 h, partially due to the spherical shape and smooth surface of the PLGA nanoparticles, which reduce the influence of shearing in the blood [38]. Strikingly, the T1/2 of DOX-PLGA@CM increased to 8.89 h, significantly longer than those of free DOX and DOX-PLGA. Meanwhile, DOX-PLGA@CM possessed the highest area under the curve (AUC0-∞) (94.49 µg L−1 h−1) compared with free DOX (21.21 µg L−1 h−1) and DOX-PLGA (48.50 µg L−1 h−1). These results confirmed that coating the nanoparticle surface with CM could notably prolong the circulation time of the nanoparticles in the blood and thus increase their opportunity for crossing the BBB. Biodistribution of the DOX-PLGA@CM Nanoparticle Encouraged by the excellent performance of the CM-camouflaged nanoparticles in in vitro BBB penetration and in vivo pharmacokinetics, we then estimated whether the CM coating could assist the nanoparticles in traversing the BBB in vivo in healthy C57BL/6J mice. As shown in the in vivo imaging in Fig. 4b, a strong fluorescence signal was present in the brains of DIR-PLGA@CM-treated mice 3 h post-injection. In contrast, there was only a weak fluorescence signal in the brains of DIR-PLGA-treated mice at the same time interval. The mice were sacrificed and imaged ex vivo to quantify the nanoparticle distribution in the brain. Consistent with the in vivo imaging results, the ex vivo fluorescence signal in the brains of DIR-PLGA@CM-treated mice was much stronger than that in the brains of DIR-PLGA-treated mice (Fig. 4c), and the difference between them in total flux intensity was around 3.2-fold (Fig. 4d). These results qualitatively and quantitatively evidenced the boosting effect of the membrane of brain metastatic breast cancer cells in propelling nanoparticles across the BBB into the brains of healthy mice. Establishment of a Breast Cancer Brain Metastases Model Cancer brain metastasis is an indicator of high malignancy and poor prognosis [6,39]. Among HER2-positive breast cancer patients, more than 30% will progress to brain metastases [6]. Unfortunately, it is still difficult to construct a brain metastases model from primary or secondary breast tumors. One commonly adopted strategy is the direct intracranial implantation of primary cancer cells by stereotactic microinjection to mimic brain metastases and primary glioma. However, this model poorly reproduces the multifocal and infiltrative growth of natural metastasis, and in particular the BBB integrity is compromised during the operation [40]. Herein, MDA-MB-231/Br (231/Br), a brain-metastasizing and brain-homing breast cancer cell line derived after two rounds of selection through intracardiac injection and resection from the brain [41], was adopted to construct a breast cancer brain metastases model. 231/Br cells were further engineered to stably express luciferase to yield 231/Br-Luc cells.
Figure S6a-b confirms that the luminescence intensity was proportional to the population of the cancer cells. In our study, 231/ Br-Luc cells were intracardiac injected into the mice (Fig. S6c) to establish a breast cancer brain metastasis model. Figure S6e-f proves that a brain metastasis tumor model was successfully established 2-3 weeks post-injection despite initially diffused distribution (Fig. S6d). Since the brain tumor colony was formed after 231/Br-luc cells cross the BBB with their unique brain-homing ability, the integrity of the BBB was well-preserved in the model. Targeting Effect of the DOX-PLGA@CM Nanoparticle in a Brain Metastatic Tumor Model To investigate whether the metastatic cancer cell membranecoated PLGA nanoparticles can traverse the BBB and target a brain metastatic tumor after systemic administration, DOX was replaced with DIR fluorescence probe during the fabrication of DIR-PLGA and DIR-PLGA@CM. Three hours post-injection, IVIS whole body imaging detected strong fluorescence signals in the brain of DIR-PLGA@CM-treated mice (Fig. 4e), mainly located at the bioluminescence signal illumined region. In contrast to others reported PLGA nanoparticle distribution in the brain tumor model established through intracranial implantation, there was nearly no fluorescence signal detected in the brain of DIR-PLGA-treated mice, which validated the integrity of the BBB for our brain metastatic tumor model. To more accurately quantify the distribution of the nanoparticle in different organs, animals were sacrificed to collect the organs for ex vivo imaging. Consistent with their in vivo imaging findings, the fluorescence signals in the isolated brain of DIR-PLGA@CMtreated mice overlapped nicely with that of luminescence signals occupied region (Fig. 4f), suggesting PLGA@CM nanoparticle could effectively cross the BBB and target the metastatic tumor in the brain. On the contrary, only a faint weak fluorescence signal was localized in the brain region of the DIR-PLGA-treated mice, suggesting the commonly accepted enhanced permeability and retention effect (EPR) of tumor is very limited for the brain tumor model established through the intracardiac injection of brain homing cancer cells [21,42], especially in its early stage of tumor growth when the BBB is intact. There were 3.2 times of difference in total flux intensity between the CM camouflaged nanoparticle and its plain counterpart (Fig. 4g). The results shown in Fig. 4 validated that the coating of metastatic cancer cell membrane could bestow the PLGA nanoparticles with the ability to traverse BBB and target metastatic brain tumor. Tumor Growth Inhibitory Effect of the DOX-PLGA@CM Nanoparticle Cheered by the in vivo prolonged blood circulation time, outstanding performance in traversing the BBB, and homologous homing effect to metastatic brain tumor of PLGA@ CM, we further investigated the antitumor efficiency of DOX loaded PLGA@CM (DOX-PLGA@CM) in the above-established breast cancer brain metastases model. The detailed treatment schedule is presented in Fig. 5a. Three weeks postintracardiac injection of 231/Br-Luc cells, mice developed similarly brain metastatic tumor burdens (based on the luminescence intensity in the brain) were randomly divided into four groups. They received saline, free DOX, DOX-PLGA, and DOX-PLGA@CM treatments via i.v. injection every 2-3 days at the DOX dose of 2.5 mg kg −1 . Bioluminescence imaging was employed to monitor the progression of the brain metastasis tumor. 
At the same time, the anticancer effect of DOX-PLGA@CM was measured by quantitatively evaluating the bioluminescence signal intensity in the brain region. As shown in Fig. 5b-c, brain metastases of breast cancer in the control group (saline) grew rapidly. It is worth noting that some mice with a relatively low tumor burden died in their early stage (at 8th day), possibly due to the invasive growth of brain metastases without any intervention and concomitant fatal compression of critical regions in the brain, which further proved the destructiveness and complication of BMBC. Meanwhile, in the free DOX-and DOX-PLGA-treated groups, minor brain metastases growth retardation effects were elicited after the corresponding treatments and some mice dead at the time of lower BMBC growth signal during the treatment similar to the control group, attributing to their rapid blood elimination (Fig. 4a), undesired BBB permeability (Figs. 3 and 4b), and poor distribution in the brain metastases region (Fig. 4). In distinct contrast, except for one animal, there was nearly no bioluminescence signal increase and no mice died in DOX-PLGA@ CM-treated mice during the course of treatment, suggesting the super inhibitory effect of DOX-PLGA@CM. In addition, Kaplan-Meier survival analysis further demonstrated that systemic treatment of DOX-PLGA@CM could effectively extend the survival of the mice with brain metastatic tumor. The median survival time for mice received DOX-PLGA@ CM treatment was 59 days (Fig. 5d), which was significantly longer than that of saline (37 days), DOX-PLGA (44 days), and free DOX (48 days) treated ones. It was noteworthy that the median survival time for free DOX-treated mice was a little bit longer than that of DOX-PLGA-treated ones, which might be ascribed to the poor distribution of DOX-PLGA in brain metastatic tumor (Fig. 4) due to a limited EPR effect and relatively slow drug release inside cancer cells. In a separate cohort of mice that received various treatments, the mice were sacrificed on the third day after the last administration (day 15). Their brains and other major organs were collected for histological analysis. H&E staining (Fig. 5e) revealed that numerous metastatic lesions presented throughout the brain in the control group, representing one of the toughest challenges encountered by regular chemotherapy [12]. Similar to that in control, many micrometastasis lesions were detected in the brain of the mice treated with DOX-PLGA and free DOX. In contrast, only some sporadic micro-metastasis in the brain of mice treated with DOX-PLGA@CM, indicating that DOX-PLGA@CM intervention not only reduced the size of brain metastases but also reduced the number of micro-metastasis lesions, which was consistent with the bioluminescence signal shown in Fig. 5b. Systemic Toxicity of DOX-PLGA@CM Nanoparticle During the course of treatment, there was no significant weight loss among all treatment groups (Fig. S7). The systemic toxicity of the treatments was further investigated through histology assay. Attributed to the relatively low dose of DOX (2.5 mg kg −1 ) given in the treatments, no apparent acute toxic damage was noticed in the major organs, including myocardial injury (Fig. S8), suggesting the excellent biocompatibility and safety of DOX-PLGA@CM for the treatment of breast cancer brain metastasis. Discussion The superiority of DOX-PLGA@CM in fighting against BMBC mainly ascribes to the coating of multifunctional MDA-MB-231/Br cell membrane (CM). 
Discussion The superiority of DOX-PLGA@CM in fighting BMBC is mainly ascribed to the coating with the multifunctional MDA-MB-231/Br cell membrane (CM). Firstly, the camouflage-like CM coating endows DOX-PLGA@CM with a prolonged blood-circulation time (half-lives, Fig. 4a) [43], which increases the chance of interaction between DOX-PLGA@CM and the BBB. Secondly, the BBB-penetrating ability that the CM inherits from the brain metastatic cells (Fig. 1d) allows DOX-PLGA@CM to traverse the BBB and enter the brain parenchyma easily (Fig. 4b) [21,26]. Moreover, the general homologous targeting effect of the CM [35] drives the permeated nanoparticles to home actively to the BMBC lesions (Fig. 4e-f). Consequently, DOX-PLGA@CM effectively inhibited the progression of BMBC. [Fig. 5 caption, panels b-e: b In vivo bioluminescence imaging of brain metastases of breast cancer at the indicated times. c Quantitative bioluminescence intensity of the brain metastases corresponding to panel (b). d Kaplan-Meier survival curves of brain metastasis mice that received the indicated treatments, n = 5. e Representative whole-brain H&E staining of brains isolated from mice that received the indicated treatments. *p < 0.05, **p < 0.01.] Combined with its wider therapeutic window relative to free DOX (Figs. 2c and S4), DOX-PLGA@CM could be a safe tool for the treatment of BMBC (Figs. S7 and S8). Conclusions In summary, a Trojan-horse nanocarrier, developed by integrating the cell membrane of a brain-homing cancer cell with a PLGA drug depot, has been explored for the treatment of brain metastatic breast cancer. With the help of the cell membrane coating, DOX-PLGA@CM nanoparticles were free of the burst release that is a common feature of most PLGA nanoparticles. Furthermore, owing to the homotypic effect of the MDA-MB-231/Br cell membrane, DOX-PLGA@CM exhibited enhanced cellular uptake and boosted killing potency against MDA-MB-231/Br cells. Functionalized with naturally selected molecules for BBB penetration, DOX-PLGA@CM showed an extended half-life and effectively crossed the BBB in both healthy and early-stage BMBC mouse models. Consequently, DOX-PLGA@CM reached the metastatic tumor lesions in the brain, slowed cancer progression, reduced tumor burden, and extended the survival time of BMBC-bearing mice. Benefitting from its straightforward fabrication and significant anticancer effect, DOX-PLGA@CM opens a new window for the therapy of BMBC and other brain metastatic cancers.
2023-02-01T15:23:54.749Z
2022-04-28T00:00:00.000
{ "year": 2022, "sha1": "666478cef79dceb0989d28a33f9b1ac25e530cc1", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40820-022-00861-1.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "666478cef79dceb0989d28a33f9b1ac25e530cc1", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [] }
245135936
pes2o/s2orc
v3-fos-license
THERMODYNAMIC AND KINETIC FEATURES OF THE FORMATION OF AMORPHOUS STATE IN FILMS DURING QUENCHING FROM THE VAPOR STATE A modernized ion-plasma sputtering method produced metastable states, including nanocrystalline and amorphous phases, in films, even in alloys whose components do not mix in the liquid state. The effective rate of energy relaxation during ion-plasma sputtering of atoms is theoretically estimated to be 10^12-10^14 K/s for different deposition modes. On the basis of thermodynamic and kinetic considerations, the different active and passive parameters governing amorphization during sputtering are analyzed. The resulting expressions are in good agreement with the experimental results and help determine further steps toward obtaining an amorphous state. Introduction Interest in materials based on components with a very limited mutual solubility in the liquid state has recently increased significantly [1][2][3][4]. The peculiarities of such systems include a large difference in the specific gravity of the components, the presence of a monotectic transformation, and a tendency toward exfoliation in the liquid state over a wide temperature-concentration interval, which unambiguously indicates a high positive heat of mixing of the alloy components. These factors significantly complicate the use of immiscible components in industry and high technology. However, applying extremely nonequilibrium conditions for obtaining or processing a material allows the effects of a positive enthalpy of mixing to be overcome and a new class of promising materials to be obtained. It is also known that quenching from the liquid state with cooling rates of 10^6-10^8 K/s is accompanied by the appearance of high internal pressures, often leading to the formation of high-pressure phases [5]. The effective rate of energy relaxation during ion-plasma sputtering of atoms is theoretically estimated to be 10^12-10^14 K/s. This makes it possible to speak of quenching from the vapor state and opens up the possibility of high-pressure phases appearing even during ion-plasma sputtering. All this, together with the large number of immiscible systems (about 200 metal-metal systems), determines the relevance and prospects of research on this new class of materials for industry. To date, quite a few single-phase alloys, namely amorphous phases and highly supersaturated solid solutions, have already been obtained in systems of immiscible components characterized by anomalously large positive mixing energy, primarily by quenching from the vapor phase [6][7][8]. The purpose of this work was to determine the effect of the extremely high cooling rates achieved in the modernized ion-plasma sputtering method, and of the thermodynamic and kinetic features, on the formation of an amorphous state in alloys whose components practically do not mix even in the liquid state. It is known that in systems with positive mixing energy the energy barriers to the formation of homogeneous structures are rather high [2]. To overcome them, the kinetic energy of the atoms arriving at the substrate must exceed the height of these barriers. Results and their discussion The nonequilibrium condensation of substances from the vapor is influenced by several thermodynamic and kinetic parameters: the mean free path, the entropy jump, the rate of temperature relaxation on the substrate, and the rate of concentration change [1,2].
Each of these parameters affects the formation of the amorphous state; therefore, for more convenient control of this process, it is useful to combine them into a single criterion. It was noted in [3] that adhesion forces act above the substrate surface and are capable of forming quasicrystalline structures with a density of atoms and gas molecules of ~10^29 m^-3, in which at a temperature of 300 K the bonding forces between atoms are very weak. When metal atoms approach the substrate in the vapor state with a low flux density under ion-plasma sputtering conditions, these adhesive forces manifest themselves more strongly than the forces of atomic interaction in gases. The paper proposes the following model for the formation of a nonequilibrium condensate on a substrate (Fig. 1). Let us conditionally single out a certain number of sputtered atoms N, whose velocity in the direction of the substrate is ν. Under the influence of the adhesive forces, their density will change depending on the location relative to the substrate and will be a function of time, n(τ); formally, the inverse function τ(n) may also exist. Let us determine the thermodynamic factor that sets the limiting size of the region of changed density, which in the classical form [9] is R_k = 2σ/(n_1·Δμ_12), (1) where σ is the surface energy, n_1 is the concentration of the new phase, and Δμ_12 is the change in the chemical potential during the phase transition. This change can be written as dμ = -S_a·dT + V_a·dp, (2) where S_a and V_a are, respectively, the entropy and volume per atom of the substance. The change in the chemical potential can then be defined as a functional: Δμ_12 = ∫ ν_n^-1 (ν_p·n^-1 - S_a·ν_T) dn, (5) where n_1 = V_a^-1 is the concentration of the new phase. Substituting relation (5) into equation (1) gives R_k = 2σ / [n_1 ∫ ν_n^-1 (ν_p·n^-1 - S_a·ν_T) dn]. (6) The following analytical conclusions can be drawn from (6): the quantities ν_T, ν_n, ν_p, S_a, σ and n_1 determine the critical limit beyond which atoms are localized by the fluctuation mechanism and form regions with a new density and a size larger than R_k. Under these conditions, the arising thermodynamic forces maintain the new state of this region; that is, this state becomes thermodynamically favorable and the system undergoes a phase transition. If, on the other hand, the size of the fluctuation region with the new density is less than R_k, the thermodynamic forces promote the disappearance of such formations. The new phase is then thermodynamically unfavorable and the system will not undergo a phase transformation: that is, the condition for amorphization of the substance is satisfied. At the same time, the size of the density fluctuations is naturally limited by relaxation processes. According to [6], the mean free path, which follows from molecular kinetic theory with a correction factor φ, is λ = φ/(√2·π·d^2·n), (7) where d is the atomic diameter. The free path λ is a kinetic parameter that significantly affects the formation of an amorphous state (AS) of a substance by limiting the size of the fluctuations; therefore, it is difficult to include it directly in equation (6). The largest size that a fluctuation can acquire cannot exceed λ.
If this value is less than R_k, then the general condition for amorphization, which includes all the parameters affecting this process, has the form λ < R_k, i.e., φ/(√2·π·d^2·n) < 2σ / [n_1 ∫ ν_n^-1 (ν_p·n^-1 - S_a·ν_T) dn]. (8) In the case of a one-component system this condition simplifies further: with an increase in the values of S_a and ν_T, inequality (8) and condition (2) become stronger, which promotes the amorphization process; thus, substances with a high evaporation temperature amorphize more easily than substances with a lower evaporation temperature. Lowering the substrate temperature decreases the temperature of the condensed atoms and strengthens the third condition for amorphization. There is also the quantity ν_p, which is currently difficult to determine experimentally, but whose effect on the amorphization of a substance is likewise significant. Conclusions From the analysis of the above equations, the following conclusions can be drawn: 1) with an increase in the values of the parameters ν_T, ν_n, S_a, σ, n_1, T_c and d, and a decrease in T, ν_p and n_1/n, the probability of the formation of an amorphous state increases; 2) the active parameters of influence on the system for its amorphization are ν_T, ν_n, ν_p, n and T. These values are basically determined by external factors: ν_n by the kinetic energy of the condensed atoms, T by the substrate temperature, ν_p by the rate of change in the pressure of the adhesive field, and n by the vapor pressure. The degree of activity of these parameters is determined by the ratio of internal and external factors; 3) the passive factors of amorphization, which are determined by the type of substance and do not depend on external conditions, are S_a, σ, n_1, T_c and d. These parameters determine the tendency of a substance toward amorphization. The listed factors and their effect on the amorphization process are in good agreement with the experimental results and contribute to the determination of further steps to obtain an amorphous state.
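As a rough numerical sketch of how criterion (8) is used, the snippet below evaluates R_k from equation (1) and the mean free path from equation (7), then checks λ < R_k. All parameter values are hypothetical order-of-magnitude placeholders (they are not taken from the paper), and the integral in equation (5) is replaced by an assumed constant value of |Δμ_12| per atom.

```python
import math

# Hypothetical order-of-magnitude inputs (placeholders, not values from the paper)
sigma = 1.0        # surface energy, J/m^2
n1    = 6.0e28     # concentration of the new phase, atoms/m^3 (~ 1/V_a)
dmu   = 1.0e-20    # assumed constant |Δμ12| per atom, J (stands in for the integral in Eq. (5))
phi   = 1.0        # correction factor in Eq. (7)
d     = 3.0e-10    # atomic diameter, m
n_gas = 1.0e29     # density of atoms/molecules above the substrate, m^-3 (value quoted in the text)

R_k = 2.0 * sigma / (n1 * dmu)                            # Eq. (1): critical fluctuation size
lam = phi / (math.sqrt(2.0) * math.pi * d ** 2 * n_gas)   # Eq. (7): mean free path

print(f"R_k    = {R_k:.2e} m")
print(f"lambda = {lam:.2e} m")
print("amorphization criterion (8), lambda < R_k:", lam < R_k)
```

With these placeholder numbers the mean free path is two orders of magnitude below the critical fluctuation size, so the amorphization condition is satisfied; changing the inputs shows which way each parameter pushes the criterion, in line with the conclusions above.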
2021-12-15T16:20:07.152Z
2021-09-07T00:00:00.000
{ "year": 2021, "sha1": "0190d0504745a59e7b7aa8b9367ee1a676117074", "oa_license": null, "oa_url": "http://jphe.dnu.dp.ua/index.php/jphe/article/download/128/120", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "889291843fdddb023e7f28045aa17d0d2e41bfb6", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [] }
20890903
pes2o/s2orc
v3-fos-license
Anxiety, depression and relationship satisfaction in the pregnancy following stillbirth and after the birth of a live-born baby: a prospective study Background Experiencing a stillbirth can be a potent stressor for psychological distress in the subsequent pregnancy and possibly after the subsequent birth. The impact on women’s relationship with her partner in the subsequent pregnancy and postpartum remains uncertain. The objectives of the study were 1) To investigate the prevalence of anxiety and depression in the pregnancy following stillbirth and assess gestational age at stillbirth and inter-pregnancy interval as individual risk factors. 2) To assess the course of anxiety, depression and satisfaction with partner relationship up to 3 years after the birth of a live-born baby following stillbirth. Methods This study is based on data from the Norwegian Mother and Child Cohort Study, a population-based pregnancy cohort. The sample included 901 pregnant women: 174 pregnant after a stillbirth, 362 pregnant after a live birth and 365 previously nulliparous. Anxiety and depression were assessed by short-form subscales of the Hopkins Symptoms Checklist, and relationship satisfaction was assessed by the Relationship Satisfaction Scale. These outcomes were measured in the third trimester of pregnancy and 6, 18 and 36 months postpartum. Logistic regression models were applied to study the impact of previous stillbirth on depression and anxiety in the third trimester of the subsequent pregnancy and to investigate gestational age and inter-pregnancy interval as potential risk factors. Results Women pregnant after stillbirth had a higher prevalence of anxiety (22.5%) and depression (19.7%) compared with women with a previous live birth (adjusted odds ratio (aOR) 5.47, 95% confidence interval (CI) 2.90–10.32 and aOR 1.91, 95% CI 1.11–3.27) and previously nulliparous women (aOR 4.97, 95% CI 2.68–9.24 and aOR 1.91, 95% CI 1.08–3.36). Gestational age at stillbirth (> 30 weeks) and inter-pregnancy interval <  12 months were not associated with depression and/or anxiety. Anxiety and depression decreased six to 18 months after the birth of a live-born baby, but increased again 36 months postpartum. Relationship satisfaction did not differ between groups. Conclusion Women who have experienced stillbirth face a significantly greater risk of anxiety and depression in the subsequent pregnancy compared with women with a previous live birth and previously nulliparous women. Background It is well known that a stillbirth affects women's mental health in the short term with increased risk of anxiety, depression and posttraumatic stress [1][2][3][4][5]. Although psychological sequelae persist for some women [2,6], symptoms of anxiety and depression seem to decrease within the first 1-2 years after a loss [2,7]. A strong desire to become pregnant again is common among couples that experience perinatal loss, and about 50% embark on a new pregnancy within a year [8,9]. The subsequent pregnancy could be regarded as an emotional stressor that may interfere with the normal grief process [10][11][12][13]. Observational studies describe elevated levels of depressive symptoms [8], posttraumatic stress symptoms [12], anxiety symptoms [8,14] and reduced levels of prenatal attachment [14] in pregnancies subsequent to stillbirth. However, the prevalence of psychiatric disorders among women pregnant after stillbirth remains unknown. 
Studies are also conflicting as to whether or not the symptoms of anxiety and depression diminish after the birth of a healthy baby [8,15,16]. Some researchers suggest that women are more vulnerable to anxiety, depression and posttraumatic stress when a new conception occurs soon (< 12 months) after the stillbirth [8,12]. On the other hand, the degree of grief and psychological distress may manifest itself even stronger if a woman struggles for a long time to become pregnant again [17,18], and women pregnant after a previous loss may show less symptoms of depression compared with their non-pregnant counterparts [7]. Gestational age at the time of pregnancy loss may influence the degree of psychological distress, and grief reactions may be stronger among women with late losses [15,19,20]. However, third trimester losses are found to be associated with less anxiety compared with second trimester losses [7]. There has been a long-standing recognition that mental health problems like anxiety and depression, as well as marital dissatisfaction, are likely to co-occur. A woman's relationship with her partner may be affected by pregnancy loss. While some find that the risk of subsequent partnership breakdown is increased [21,22], others find no such association [23]. To our knowledge, there is little data on the effects of a previous stillbirth on partner relationship in the subsequent pregnancy. Establishing the effects of a previous stillbirth on womens mental health during and after a subsequent pregnancy and identifying risk factors for anxiety and depression provides a base to improve health care guidelines. One of the main challenges when doing research in this field is the relatively low incidence of stillbirth in industrialised countries. Therefore, most studies in this field are limited by small sample sizes without adjustments for confounders or are retrospective case-control studies with imminent risks of methodological bias. The objective of the present study was to estimate the prevalence of anxiety and depression in the subsequent pregnancy after stillbirth and to assess gestational age at the time of stillbirth and inter-pregnancy interval as individual risk factors. We also wanted to investigate the course of anxiety and depression as well as satisfaction with partner relationship from the second trimester and up to 36 months after the birth of a live-born baby. Methods This study is based on selected data from the Norwegian Mother and Child Cohort Study (MoBa) and on records from the Medical Birth Registry of Norway (MBRN). MoBa is a prospective population-based pregnancy cohort study conducted by the Norwegian Institute of Public Health [24]. Participants were recruited from all over Norway from 1999 to 2008. The pregnant women consented to participate in 41% of invited pregnancies and the cohort now includes more than 95,000 women, 75,000 men and 114,000 children [25]. After registering for a routine ultrasound examination at approximately 17 weeks of gestation, all women received a postal invitation, which included an informed consent form and the first questionnaire. Follow-up is conducted by questionnaires at regular intervals. The current study is based on version VIII of the quality-assured data files released for research on 14th of February 2014 and reports data collected from 1999 to 2012. 
The MBRN is based on compulsory notification of all live births, stillbirths and late miscarriages or terminations of pregnancy and includes information on current pregnancy and delivery as well as previous pregnancies [26]. This sub-study included women participating in MoBa, who were pregnant subsequent to a stillbirth, and two reference groups: 1) women with at least one live birth and no previous stillbirth and 2) nulliparous women. Women not responding to the first MoBa questionnaire or with missing MBRN data were excluded. For all three groups only women with singleton or twin pregnancies and with the MoBa pregnancy resulting in a live birth, were included. Results of the previous pregnancies were identified using data from the MoBa questionnaires and verified by information from the MBRN. Stillbirth was defined according to the World Health Organizations International Statistical Classification of Diseases 10th revision, ie, fetal death at 22 or more completed gestational weeks or birthweight > 500 g [27]. Aside from the selection criteria, the reference women were randomly selected from the entire MoBa cohort. A previous study reported high levels of depression symptoms among 28% of women pregnant after stillbirth compared with 8% of controls [8]. Assuming a prevalence of 25% for depression or anxiety in the subsequent pregnancy after a stillbirth and 10% for reference women, a sample size of N = 100 in each group yields 80% power for detecting differences of this magnitude using a 5% significance level. We identified 197 women in the MoBa cohort who had experienced stillbirth in their previous pregnancy (previous stillbirth group). The reference groups included 394 women with a live birth in their previous pregnancy (previous live birth group) and 394 nulliparous women (previously nulliparous group). We assessed data from questionnaires answered in gestational weeks 17 and 30, and 6, 18 and 36 months after the delivery of a live-born baby. Background variables from the MBRN for both the MoBa pregnancy and the previous pregnancy (previous stillbirth or live birth), were also assessed. At the second assessment (30 gestational weeks) 174 women with a previous stillbirth, 362 with a previous live birth and 365 nulliparous women completed the questionnaire. A flowchart for the selection of the substudy population is provided (Fig. 1). Outcome measures Depression and anxiety was measured using short versions of the Hopkins Symptom Checklist (SCL) [28] shown to correlate highly with the total score of the original scale, and to have good psychometric properties [29,30]. We used two 4-item subscales measuring anxiety and depression during the previous 2 weeks (SCL-4a and SCL-4d). A combined score was used in pregnancy week 17. Items were scored on a Likert scale ranging from 1 ("not at all bothered") to 4 ("very much bothered"). We defined a mean score > 1.75 on SCL-4a and/or SCL-4d as presence of anxiety and/or depression [31]. Cronbach's alpha of internal consistency ranged from 0.69-0.80 for the anxiety subscale and 0.77-0.81 for the depression subscale. A five-item version of the Relationship Satisfaction Scale (RS) was used to assess maternal relationship satisfaction among married/cohabiting women [32]. Developed for the MoBa study, the RS is based on core items from previously developed measures of marital satisfaction and relationship quality [33][34][35]. 
The RS correlates 0.92 with the Quality of Marriage index [36] and has a high ability to predict future break-up/divorce and life satisfaction [32,37,38]. The abbreviated five-item version (RS5) correlates 0.97 with the full 10-item version [32]. Each item is rated on a 6-point (1-6) Likert scale, and the total score is the mean score of all items. An average score below 4.0 implies a relatively high risk of break-up (11-15%) [32] and a score ≥ 4 was applied as cut-off to denote relationship satisfaction in this study. Cronbach's alpha ranged from 0.87 to 0.90. Covariates Sociodemographic, health related and obstetrical history factors were considered as potential confounders for the association between having experienced a stillbirth and anxiety or depression in the subsequent pregnancy. Maternal age at the time of the MoBa delivery (whole years) was retrieved from the MBRN. Co-morbidity was defined as having at least one of the following previous medical problems reported in the MBRN: Asthma, hypertension, recurrent urinary tract infections, kidney disease, rheumatoid arthritis, heart disease, epilepsy, diabetes mellitus, and/or thyroid disease. Other covariates were questionnaire data obtained at gestational week 17 or 30 and included parental status at first assessment (married/cohabiting), native language other than Norwegian, pre-pregnancy daily smoking, high pre-pregnancy body mass index (BMI ≥ 25), low education (high school or less), low income (< 200,000 Norwegian kroner/year) and previous termination(s) of pregnancy or miscarriage(s). Stressful life events were defined as having at least one of the following experiences during the last 12 months: 1) Problems at work or study place, 2) financial problems, 3) divorce/separation/relationship breakup, 4) conflicts with family or friends, 5) serious injury or illness to the woman herself or a loved one, or 6) involvement in a serious accident, fire or robbery. Potential predictors of anxiety and depression in the pregnancy after stillbirth Information on gestational age at the time of stillbirth and inter-pregnancy interval was retrieved from the MBRN. Gestational age at stillbirth (based on last menstrual period and/or ultrasound measurement) was categorised as ≤ 30 weeks or > 30 weeks. Inter-pregnancy interval was defined as number of months between the date of stillbirth and the subsequent conception (estimated by ultrasound measurements) and categorised as < 12 months or ≥ 12 months. Statistical analyses Categorical data were reported as proportions and compared between groups using chi-square tests. Age at the time of the MoBa delivery was reported as mean years and compared between groups using independent samples t-test. To reduce potential sample distortion caused by missing values, the Estimation-Maximation procedure in SPSS was used to impute missing values on SCL-4a, SCL-4d and RS5 if at least 50% of items were present. This resulted in 0.4% missing on SCL-4a, 0.4% missing on SCL-4d and 1.9% missing on RS5 at first assessment. The McNemar's test was used to analyse the differences in frequency of anxiety, depression and relationship satisfaction between different time points. Binary and multivariate logistic regression models were used to estimate odds ratios (OR) and adjusted odds ratios (aOR) for anxiety and/or depression in subsequent pregnancy among women with a previous stillbirth compared with the two reference groups. 
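The sample-size statement above (N = 100 per group giving 80% power to detect a 25% vs 10% prevalence at a 5% two-sided significance level) can be reproduced with a standard normal-approximation power calculation for two independent proportions. A minimal sketch follows; this is not necessarily the authors' actual calculation, which may have used different software or a different approximation.

```python
from math import sqrt
from scipy.stats import norm

def power_two_proportions(p1, p2, n, alpha=0.05):
    """Approximate power of a two-sided z-test comparing two independent proportions,
    with n subjects per group (normal approximation)."""
    p_bar = (p1 + p2) / 2
    se0 = sqrt(2 * p_bar * (1 - p_bar) / n)            # standard error under H0 (pooled)
    se1 = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)  # standard error under H1
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf((abs(p1 - p2) - z_crit * se0) / se1)

# 25% vs 10% prevalence, N = 100 per group, 5% two-sided significance level
print(f"power = {power_two_proportions(0.25, 0.10, 100):.2f}")   # ~0.80
```

Running this gives a power of approximately 0.80, consistent with the figure quoted in the Methods.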
Covariates that were unevenly distributed between the groups (p < 0.1), associated with the outcome variable in a bivariate model (p < 0.1), and not strongly correlated (correlation coefficient < 0.7), were included in the multivariate analyses. Current age was included in all multivariate models and each final model was checked for interactions. For the stillbirth group, separate binary regression models were used to test if gestational age at stillbirth or inter-pregnancy interval were significant predictors for anxiety or depression in the subsequent pregnancy. To preserve power and reduce the number of comparisons, we combined anxiety and depression in the subgroup analyses. Covariates and anxiety/depression in the third trimester were compared between participants completing all five questionnaires and participants who dropped out at any point after 30 gestational weeks. All data were analysed using the Statistical Package for Social Science version 22.0 (SPSS Inc., Chicago, IL, United States). Two-sided p-values < 0.05 were regarded as significant. Results Background characteristics are presented in Table 1. Women with a previous stillbirth and women with a previous live birth did not differ significantly according to age, but were significantly older than the previously nulliparous women. A high BMI and a low educational level was more prevalent in the previous stillbirth group compared with both reference groups. Women with a previous stillbirth more often reported stressful life events compared with women with previous live births, but not compared with previously nulliparous women. Background characteristics did not differ significantly between participants completing all five questionnaires and participants who dropped out at any point after 30 gestational weeks, with the exception of more smokers among drop-outs in the previous stillbirth group, and more participants with low education and younger age among dropouts in the previously nulliparous group (data not shown). Prevalence of anxiety and depression In the third trimester of pregnancy (30 gestational weeks), women with a previous stillbirth more often experienced anxiety (22.5%) and depression (19.7%) compared with women with previous live births (4.4% and 10.3% respectively) and previously nulliparous women (5.5% and 9.9% respectively) ( The proportion of women with both anxiety and depression in the third trimester was 12.7% among women with a previous stillbirth compared with 3.6% in each reference group (p < 0.001 for both comparisons). The prevalence of anxiety and depression decreased significantly from first assessment to 6 months postpartum among women with a previous stillbirth (p < 0.001 for anxiety and p = 0.031 for depression). By six and 18 months postpartum, respectively, the prevalence of depression and anxiety was not significantly different between groups ( Table 2). From six to 36 months postpartum, the prevalence of anxiety and depression increased significantly in the stillbirth group (p = 0.039 and 0.035 respectively) and the prevalence of anxiety, but not depression, increased significantly in the nulliparous group (p = 0.039) (Fig. 2). At 36 months postpartum, the prevalence of anxiety and depression was higher among women with a previous stillbirth compared with women with a previous live birth, but not compared with previously nulliparous women ( Table 2). 
The prevalence of anxiety and depression in the third trimester differed between women with a previous stillbirth who completed all five questionnaires and those who dropped out at any point after 30 gestational weeks (for anxiety 15.2 vs 32.4%, respectively, p = 0.007, and for depression 12.1 vs 29.7%, respectively, p = 0.004). No such differences were observed when comparing drop-outs with responders among women with a previous live birth or previously nulliparous women. Inter-pregnancy interval and gestational age as risk factors for anxiety and depression in the subsequent pregnancy after stillbirth The mean gestational age at stillbirth was 33.5 weeks (95% CI 32.5-34.6, range 20.4 to 42.6) and 115 women (68%) lost their baby at a gestational age of more than 30 weeks. The median number of months between stillbirth and the subsequent conception was 6 (range 1 to 183 months) and the majority of the women (n = 122, 70.5%) became pregnant within 12 months after the stillbirth. An inter-pregnancy interval shorter than 12 months between stillbirth and the next conception, or a gestational age at stillbirth > 30 weeks, was not significantly associated with higher odds of anxiety and/or depression in the third trimester of the subsequent pregnancy. Relationship satisfaction The frequency of relationship satisfaction among married/cohabiting women decreased slightly in all three groups from the first assessment to 36 months postpartum (p = 0.012 for women with a previous stillbirth, 0.049 for women with a previous live birth, and < 0.001 for previously nulliparous women). There was no significant difference between women with a previous stillbirth and the reference groups at any point (Table 2). Main findings Women with a previous stillbirth had a higher prevalence of anxiety and depression in the subsequent pregnancy compared with women with previous live births and previously nulliparous women. The prevalence decreased considerably after the birth of a live-born baby, and was not significantly different from the reference groups by 6 months postpartum for depression and 18 months postpartum for anxiety. However, by 36 months postpartum the prevalence of anxiety and depression had increased and was again significantly higher compared with women with a previous live birth, but not compared with previously nulliparous women. Relationship satisfaction was not significantly different between groups at any time point. Having experienced a late stillbirth (> 30 weeks) or a short interval between stillbirth and the subsequent conception (< 12 months) was not significantly related to anxiety and/or depression in the subsequent pregnancy. Strengths and limitations Although symptom levels have been studied previously, to our knowledge this is the first study to estimate the prevalence of anxiety and depression contemporaneously among women pregnant after stillbirth. We are also the first to assess relationship satisfaction in this setting. The data are derived from a large national cohort and our sample size is larger than in the majority of previous studies in this field. The prospective design of the present study minimised reporting bias and enabled a long follow-up period. Previous studies have typically made comparisons solely to a control group consisting of either women with previous live births or primigravidas.
Applying two reference groups to further explore the psychological impact of stillbirth makes this study unique. The participation rate of 40.6% at first assessment is a weakness, but as expected for population-based studies [39]. A study investigating selection bias in the MoBa study found that there was an under-representation of participants with a number of exposure variables, including previous stillbirth [40]. The same study found that prevalence estimates of exposures or outcomes may be biased due to self-selection, but that self-selection is not a problem in studies of exposure-outcome associations. We therefore argue that our findings can be generalised to other women pregnant after stillbirth. However, we cannot rule out that women with greater psychological distress after a previous stillbirth more often declined participation than women coping better after the incident. Neither can we rule out that women with mild psychological distress may have been less motivated to participate. Further, the data reported was collected over a relatively long time period (from 1999 to 2012) and changes in practice and support may have influenced our findings. Due to ethical limitations regarding linking the MoBa data to the MBRN, the study was approved only to use a limited number of reference women instead of using the entire birth cohort as a reference. However, the prevalence of anxiety and depression among the two reference groups was similar to a control group of women without epilepsy in a previous MoBa sub-study [41]. Although the dropout rate was comparable to other studies of perinatal depression [42], missing data in the follow-up period is a concern regarding the ability of this study to make conclusions about mental health outcomes from 6 months to 3 years postpartum. As anxiety and/or depression in the subsequent pregnancy after stillbirth was more prevalent among drop-outs, anxiety and depression at follow-up is probably underestimated. Unfortunately, we do not have reliable data regarding the prevalence of anxiety and depression before the occurrence of stillbirth. It would also be interesting to compare these women to their nonpregnant counterparts in order to assess whether the prevalence of anxiety and depression are indeed associated with being pregnant. The estimates for anxiety and depression in our study relied on self-reporting using short-form versions of validated screening tools. Even though short-form versions affect the measurement precision, it often remains sufficient for epidemiological purposes [43]. Psychiatric symptoms may be more correctly reported in an anonymous questionnaire than in a clinical interview [44] and questionnaire-based screening tools are often used to estimate the proportion at risk of having a mental disorder in a population. However, it is important to highlight that the screening tools are not suited to make formal diagnoses. The sample size required that data on anxiety and depression were combined in the subgroup analyses on gestational age at stillbirth and inter-pregnancy interval, limiting the generalizability of these analyses. As we did not want to increase the risk of type II errors, adjustments for multiple comparisons were not performed and findings with p-values ≥ 0.01 should be considered with some caution. Interpretation Our findings confirm that anxiety and depression is prevalent in the pregnancy following stillbirth. Hughes et al. 
[8] found that, compared with primi-gravida, women who were pregnant subsequent to a stillbirth had significantly higher levels of depression and state anxiety during pregnancy, but did not differ significantly from controls in the postpartum period and 12 months postpartum. Armstrong et al. similarily reported decreased levels of depressive symptoms and anxiety three and 8 months after the birth of a subsequently healthy infant among women with a history of perinatal death [17]. This is in accordance with our findings. However, as the follow-up period in our study extends further, we found that the prevalence of anxiety and depression increased again by 36 months post-partum. This may indicate that the subsequent birth of a live-born baby is only temporarily relieving for the psychiatric morbidity associated with stillbirth. Blackmore et al. reported that depression and anxiety associated with a previous prenatal death show a persisting pattern up to 33 months after the birth of a healthy child [18]. The latter study included mainly miscarriages, and only few stillbirths, and it is not specified whether the pregnancy is directly subsequent to the loss. It is therefore not comparable to ours. Contrary to Hughes et al. [8], we did not find that becoming pregnant within 12 months after stillbirth significantly increases the risk of anxiety and/or depression in the subsequent pregnancy [45]. One explanation for the discrepancy may be that the majority of the women in our study conceived within a year after the loss. A Swedish study demonstrated that mothers whose baby had died in utero were given different kinds of advice concerning a suitable time for a subsequent pregnancy. The best advice seems to be that the mother should wait until she, herself, feels ready [43]. In our study, early (23-30 weeks) compared with late stillbirth (> 30 weeks) was not significantly associated with anxiety and/or depression in the subsequent pregnancy. However, the p-value was just slightly above the significance level. While the duration of the pregnancy could be relevant for the risk of psychological distress after a loss [19], this is probably of diminishing importance in pregnancies that have advanced beyond 22 gestational weeks. In accordance with findings by Rådestad et al. [23], we found that a previous stillbirth did not affect satisfaction with partner relationship. Relationship satisfaction decreased slightly in all study groups and the explanation may be that having a child is by itself associated with a decline in relationship satisfaction [37,46,47]. Conclusions Anxiety and depression were more prevalent in the pregnancy following stillbirth compared with women with previous live births or previously nulliparous women. However, the prevalence declined after the birth of a liveborn baby and was comparable to the reference groups by six to 18 months postpartum. After this time, depression and anxiety seemed to increase somewhat, particularly in the previous stillbirth group. Timing of the subsequent pregnancy after stillbirth was not associated with anxiety and depression in the third trimester and neither was gestational age at stillbirth. Having experienced stillbirth was not related to satisfaction with partner relationship in the subsequent pregnancy or after the birth of a live-born baby. 
Implications from these findings are 1) that health care professionals in prenatal care should routinely screen for symptoms of depression and anxiety among women pregnant after stillbirth and 2) when timing a subsequent pregnancy, couples should be guided by their individual needs, taking maternal age and medical considerations into account. Future research should evaluate the quality of care provided to reduce psychological distress in women pregnant after stillbirth. This field would also benefit from studies that take prior mental health problems into account and studies that focus on the psychological well-being of partners.
2018-01-28T21:59:44.959Z
2018-01-24T00:00:00.000
{ "year": 2018, "sha1": "db9a2ad28ffecdadb16ed18a86d4a87d704e83ce", "oa_license": "CCBY", "oa_url": "https://bmcpregnancychildbirth.biomedcentral.com/track/pdf/10.1186/s12884-018-1666-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "db9a2ad28ffecdadb16ed18a86d4a87d704e83ce", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
210164870
pes2o/s2orc
v3-fos-license
Infinite-fold enhancement in communications capacity using pre-shared entanglement Pre-shared entanglement can significantly boost communication rates in the regime of high thermal noise, and a low-brightness transmitter. In this regime, the ratio between the entanglement-assisted capacity and the Holevo capacity, the maximum reliable-communication rate permitted by quantum mechanics without any pre-shared entanglement as a resource, is known to scale as $\log(1/N_S)$, where $N_S \ll 1$ is the mean transmitted photon number per mode. This is especially promising in enabling a large boost to radio-frequency communications in the weak-transmit-power regime, by exploiting pre-shared optical-frequency entanglement, e.g., distributed by the quantum internet. In this paper, we propose a structured design of a quantum transmitter and receiver that leverages continuous-variable pre-shared entanglement from a downconversion source, which can harness this purported infinite-fold capacity enhancement---a problem open for over a decade. Finally, the implication of this result to the breaking of the well-known {\em square-root law} for covert communications, with pre-shared entanglement assistance, is discussed. Introduction-There is much interest in recent years in architecting the quantum internet [1,2], a global network built using quantum repeaters [3,4] that can distribute entanglement at high rates among multiple distant users per application demands [5][6][7]. There are several well-known applications of shared entanglement, a new information currency: distributed quantum computing [8], secure communications with physics-based security [9], provably-secure access to quantum computers on the cloud [10], and entanglement-enhanced distributed sensors [11][12][13][14]. In this paper, we elucidate a system design for a yet-another high-impact application of shared entanglement: that of providing a large boost to classical (e.g., radio-frequency, or RF) communication rates. Transmission of electromagnetic (EM) waves in linear media, as in optical fiber, over the atmosphere or in vacuum, can be described as propagation of a set of mutually-orthogonal spatio-temporal-polarization modes over the single-mode lossy Bosonic channel N N B η , described by the Heisenberg evolutionâ out = √ ηâ in + √ 1 − ηâ E , where η ∈ (0, 1] is the modal (power) transmissivity, and the environmentâ E is excited in a zeromean thermal state of mean photon number per mode N B . Alice encodes classical information by modulating the state of theâ in modes, with the constraint of N S mean photons transmitted per mode. The quantum limit of the classical communication capacity, known as the Holevo capacity, in units of bits per mode, is given by: where N S ≡ ηN S +(1−η)N B is the mean photon number per theâ out mode at the channel's output received by Bob, and g(x) ≡ (1 + x) log(1 + x) − x log(x) is the von Neumann entropy of a zero-mean single-mode thermal state of mean photon number x [15,16] 1 . If Alice and Bob pre-share (unlimited amount of) entanglement as an additional resource, but operating under the same conditions as above-transmitting classical data over N N B η with a transmit photon number constraint of N S photons per mode-the capacity, in units of bits per mode, increases to the following [17][18][19][20][21][22]: (2) where C E is the entanglement assisted classical capacity of the quantum channel N N B η , and A ± = 1 2 (D−1±(N S − N S )), with D = (N S + N S + 1) 2 − 4ηN S (N S + 1). 
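A minimal numerical sketch of the capacities discussed above, using the standard closed-form expressions for the single-mode lossy thermal-noise bosonic channel written in terms of the quantities defined in the text (g(x), N_S' = ηN_S + (1−η)N_B, A_± and D, with D read as the square root of the bracketed discriminant). The printed values are illustrative only; the point is that the ratio C_E/C grows as N_S shrinks.

```python
from math import log2, sqrt

def g(x):
    """von Neumann entropy (bits) of a thermal state of mean photon number x."""
    return (1 + x) * log2(1 + x) - x * log2(x) if x > 0 else 0.0

def holevo_capacity(Ns, Nb, eta):
    """C: unassisted (Holevo) capacity of the lossy thermal-noise channel, bits/mode."""
    Ns_out = eta * Ns + (1 - eta) * Nb       # mean photon number of the received mode
    return g(Ns_out) - g((1 - eta) * Nb)

def ea_capacity(Ns, Nb, eta):
    """C_E: entanglement-assisted capacity, with A_pm and D as defined in the text
    (D taken as the square root of the bracketed discriminant)."""
    Ns_out = eta * Ns + (1 - eta) * Nb
    D = sqrt((Ns + Ns_out + 1) ** 2 - 4 * eta * Ns * (Ns + 1))
    A_plus = (D - 1 + (Ns_out - Ns)) / 2
    A_minus = (D - 1 - (Ns_out - Ns)) / 2
    return g(Ns) + g(Ns_out) - g(A_plus) - g(A_minus)

eta, Nb = 0.01, 10.0
for Ns in (1e-2, 1e-4, 1e-6):
    C, CE = holevo_capacity(Ns, Nb, eta), ea_capacity(Ns, Nb, eta)
    print(f"N_S = {Ns:.0e}:  C = {C:.3e}  C_E = {CE:.3e}  C_E/C = {CE / C:.1f}")
```

In the noiseless limit (N_B = 0, η = 1) the same code gives C_E = 2g(N_S), recovering the familiar factor-of-two limit of superdense coding mentioned below.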
In the regime of a low-brightness transmitter (N S 1) and high thermal noise (N B 1) , which goes to infinity as N S → 0 [23]. The practical implication of this can be potentially revolutionary in RF communications, since the condition N B 1 is naturally satisfied at the longer center wavelengths characteristic of RF. Exploiting (optical frequency) pre-shared entanglement between Alice and Bob-distributed via a repeatered quantum internet-potentially an order of magnitude or more enhancement in classical communications rate is possible, depending upon the actual operational regime of loss, noise, and transmit power, compared to conventional RF communications that does not use preshared entanglement as a resource. See Supplementary Information for a more quantitative discussion on this. Despite the large capacity advantage attainable with pre-shared entanglement been known for decades, a structured transmitter-receiver design to harness this enhancement has eluded us. Continuous-variable (CV) superdense coding yields a factor-of-two capacity advantage in the noiseless case, but does not provide any advantage in the noisy regime [24]. It was recently shown that phase-only encoding on pre-shared two-mode squeezed vacuum states attains C E in the N S 1, N B 1 regime [23], but with a receiver measurement that does not translate readily to a structured optical design. Receivers based on optical parametric amplification (OPA) [25] and sum-frequency-generation (SFG) [26] only provide at most a factor-of-2 improvement over C, as shown in [23] and the Supplementary Information. In this paper, we take an important step towards solving this long-standing open problem. We combine insights from the SFG receiver proposed for a quantum illumination radar [26], and the Green Machine (GM) receiver proposed for attaining superadditive communication capacity with phase modulation of coherent states [27] 2 , to obtain a transmitter-modulation-codereceiver structured design that saturates the ln(1/N S ) scaling in capacity gain over the Holevo capacity in (3). Joint detection receiver design-Let us consider the transmitter-receiver structure sketched in Fig. 1. Alice employs a binary phase shift keying (BPSK) modulation with a Hadamard code of order n. Let us assume n is a power of 2 such that a Hadamard code exists. A block of M temporal modes of the signal output of a pulsed spontaneous parametric downconversion (SPDC) source, an M -fold tensor product two-mode squeezed vacuum |ψ ⊗M SI , is modulated by one value of binary phase θ i ∈ {0, π}. The transmission of an entire BPSKmodulated Hadamard codeword consumes n SPDC signal pulses, modulated with phases θ i , 1 ≤ i ≤ n, consuming nM uses of the single-mode channel N N B η . The corresponding idler modes are losslessly pre-shared with Bob, e.g., using a fault-tolerant quantum network. Alice's phase modulation of the signal modes, followed by transmission of the signal modes through N N B η , turns into phase modulation of (classical) phase-sensitive cross correlations between Bob's received modes and (losslesslyheld) idler modes. This correlation bears the information in Alice's phase modulation through the lossy-noisy channel much stronger than any classical means, e.g., an amplitude-phase modulated coherent state. 
To translate phase modulation of phase-sensitive signal-idler cross correlations into modulation of (quadrature) field displacement, for which we have significant prior literature on receiver designs, e.g., for phase modulated coherent states, we employ SFG, a non-linear 2 Jet Propulsion Laboratory developed a decoding algorithm for the first-order length-n Reed Muller codes that employed the fast Hadamard transform in a specialized circuit that used (n log n)/2 symmetric buttery circuits, for sending images from Mars to the Earth as part of the Mariner 1969 Mission. This circuit came to be known as the Green Machine named after its JPL inventor. Guha developed an optical version of the Green Machine decoding circuit, replacing the butterfly elements by 50-50 beamsplitters, which he showed achieved superadditive communication capacity with Hadamard-coded coherent-state BPSK modulation, i.e., communication capacity in bits transmissible reliably per BPSK symbol that is fundamentally higher than that is physically permissible with any receiver that detects each BPSK modulated pulse one at a time [27]. This paper's joint detection receiver for entanglement assisted communications leverages insights from that optical Green Machine. , and M n idler modes held by Bob, entangled with Alice's transmitted modes. The pre-shared entanglement is shown using red (dash-dotted) lines. In an actual realization, only one n-mode Green Machine is needed, because the sum-frequency modesb optical process, which runs SPDC in reverse, per the the reduced Planck constant, and g the nonlinear interaction strength. Signal-idler photon pairs from the M input mode pairs are up-converted to a sum-frequency modeb, and the phase-sensitive crosscorrelations â SmâIm manifests as a (quadrature) displacement of a thermal state of theb mode [26]. Bob employs n feed-forward (FF) SFG modules-made by stacking K SFG stages, each of duration π/2 √ M g, and K beamsplitters and combiners of transmissivities κ = 1/K and 1 − κ respectively, as shown in Fig. 1to mix the nM modulated-received modes with the nM locally-held idler modes, pre-shared with Bob, entangled with Alice's signal modes. The reason for the K-stage SFG is that the bright noise background results in bright received modes, and that we wish the signal input of each SFG stage to have much less than a photon per mode, so that we can borrow the "qubit-approximation" analysis of the SFG from [26].b (i) k denotes the sum-frequency mode of the k-th SFG, 1 ≤ k ≤ K, of the i-th FF-SFG module, 1 ≤ i ≤ n. The sum-frequency outputsb (i) k , 1 ≤ i ≤ n from the K FF-SFG modules are input into an n-mode linear-optical Green Machine (GM) circuit GM k , each of which has n outputs that are each detected by single photon detectors [27]. An n-mode GM, as shown in the bottom right of Fig. 1, is a linear-optical circuit comprising n log 2 (n)/2 50-50 beasmplitters. It turns an n-mode BPSK-modulated coherent-state Hadamard codeword at its input into one of the n codewords of an order-n coherent-state pulse-position modulation (PPM) at its output. The electrical outputs of the i-th detectors from each of the K GM modules are classically combined into one output that is monitored for zero or more clicks, during each SPDC pulse interval. Since the K sum-frequency modesb (i) k , 1 ≤ k ≤ K in the i-th FF-SFG module come out in a temporal sequence, in reality we will only need one n-mode GM and n detectors. The diagram in Fig. 1 shows K GMs for ease of explanation. 
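To make the Green Machine's action concrete, here is a small mean-field (classical amplitude) sketch: a lossless n-mode interferometer realizing the normalized Hadamard transform maps a BPSK Hadamard codeword with per-mode amplitude ±α into a single output slot of amplitude √n·α, which is the PPM-like concentration exploited above. This is only the coherent-amplitude picture; the analysis in the text also tracks the thermal noise carried by the sum-frequency modes.

```python
import numpy as np

def hadamard(n):
    """Sylvester-type Hadamard matrix of order n (n must be a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n, alpha = 8, 0.3
H = hadamard(n)
codeword = H[5]                        # one BPSK Hadamard codeword (entries +1 / -1)
inputs = alpha * codeword              # mean field amplitudes at the n Green Machine inputs
outputs = (H / np.sqrt(n)) @ inputs    # lossless interferometer = normalized Hadamard transform

print(np.round(np.abs(outputs), 3))    # all the mean field ends up in slot 5, amplitude sqrt(n)*alpha
```

Because the rows of H are mutually orthogonal, each of the n codewords is routed to a distinct output slot, which is why a bank of single-photon detectors at the GM outputs can read off the transmitted Hadamard codeword.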
Let us define ρ_th(α, N_T) = (1/(πN_T)) ∫ e^(-|β-α|^2/N_T) |β⟩⟨β| d^2β as a single-mode thermal state with mean field amplitude α ∈ C. The photodetection statistics of this state are Laguerre-distributed [28]. The probability that this state produces zero clicks when detected with an ideal photon detector is ⟨0|ρ_th(α, N_T)|0⟩ = (1/(N_T + 1)) e^(-|α|^2/(N_T+1)). In the κ ≪ 1/N_B limit, for the k-th GM, the n input modes are in states ρ_th(±α^(k), N_T), where the ± signs are governed by the specific Hadamard codeword that was used [26]. Let us also define N_k = |α^(k)|^2. One of the n output modes of the k-th GM (which one depends on which Hadamard codeword was sent) is in a displaced thermal state ρ_th(√n α^(k), N_T). We call this the "pulse-containing output" (mode). The remaining n − 1 output modes are in the zero-mean thermal state ρ_th(0, N_T). At the n classically-combined detector outputs, produced by detecting one Hadamard codeword, i.e., Mn received-idler mode pairs, we record a random binary n-vector of (no-click, click) outcomes, i.e., 2^n possible outcomes. The 2^n click patterns are clubbed into n + 1 outcomes: a click in a given output and no clicks elsewhere, or an erasure, which refers to either zero clicks on all n outputs or multiple clicks across the outputs. The modulation-code-receiver sequence described above induces an n-input, (n + 1)-output discrete memoryless channel (DMC), which happens to be identical to the DMC induced by coherent-state pulse-position modulation (PPM) and single-photon detectors with a non-zero background (or dark) click probability. [Fig. 2 caption, fragment: This shows that the capacity ratio scales as log(1/N_S), which goes to infinity as N_S → 0, for any given M. However, this scheme (BPSK modulation, Hadamard code, and our proposed structured joint-detection receiver) does not achieve C_E. We have assumed η = 0.01 and N_B = 10 photons per mode for all the plots in this figure.] The capacity of this channel [29], divided by (Mn), is the bits-per-mode capacity R_E^(M,n) attained by our modulation-code-receiver trio. In this formula, 1 − p_c is the probability that the pulse-containing output of the receiver does not produce any clicks, and 1 − p_b is the probability that any given non-pulse-containing output does not produce any clicks. Assuming the photodetection statistics of the i-th outputs of the K GMs are statistically independent, the expression simplifies to one involving A = nMκηN_S(N_S + 1)/(N_T + 1). In Fig. 2, we plot the ratio C_E/C as a function of N_S in the N_S ≪ 1 regime, for η = 0.01 and N_B = 10. We also plot the capacity ratios R_E^(M,n)/C attained for M = 10^5 and n ∈ {2, 2^2, . . . , 2^20}. Let us define R_E^(M) to be the envelope of the capacities attained by our scheme over all n, for a given M. In order to derive the asymptotic capacity scaling, we apply the conditions pertinent to our problem setting and, through a series of approximations and leveraging analytical connections to noisy pulse-position modulation (PPM), we prove in the Supplementary Information that R_E^(M)/C ~ log(1/N_S) as N_S → 0, establishing that our modulation-code-receiver combination attains the optimal scaling of entanglement-assisted communications in the aforesaid regime, and, despite not meeting C_E, is in principle capable of harnessing the infinite-fold capacity enhancement possible using shared entanglement, using quantum optical states, processes and detection schemes that are readily realizable. Further, this capacity ratio is clearly larger than 2, the best achievable ratio with an OPA receiver [25] or an FF-SFG receiver [23,26] (see Supplementary Information).
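A minimal sketch of the capacity computation for the induced n-input, (n + 1)-output DMC described above: given the per-output click probabilities p_c (pulse-containing output) and p_b (any other output), and assuming independent clicks across the outputs, it forms the exclusive-click and erasure outcome probabilities and evaluates the Shannon mutual information with equiprobable inputs, normalized to bits per mode. The p_c and p_b values below are placeholders; in the paper they are derived from the FF-SFG and Green Machine photodetection statistics (e.g., through the quantity A defined above).

```python
import numpy as np

def dmc_rate_bits_per_mode(p_c, p_b, n, M):
    """Shannon mutual information (equiprobable inputs) of the n-input, (n+1)-output DMC,
    divided by M*n channel uses, i.e. bits per mode. Outcomes: an exclusive click in
    output j (j = 1..n), or an 'erasure' (no clicks anywhere, or clicks in several outputs)."""
    p_e = p_c * (1 - p_b) ** (n - 1)                 # click only in the pulse-containing output
    p_d = (1 - p_c) * p_b * (1 - p_b) ** (n - 2)     # click only in one given wrong output
    p_erase = 1 - p_e - (n - 1) * p_d                # everything else lumped into an erasure

    # transition matrix: row = transmitted Hadamard codeword, column = receiver outcome
    P = np.full((n, n + 1), p_d)
    P[:, n] = p_erase
    np.fill_diagonal(P, p_e)

    px = np.full(n, 1.0 / n)
    py = px @ P
    I = 0.0
    for x in range(n):
        for y in range(n + 1):
            if P[x, y] > 0:
                I += px[x] * P[x, y] * np.log2(P[x, y] / py[y])
    return I / (M * n)

# Placeholder click probabilities (NOT derived here from the FF-SFG statistics)
print(dmc_rate_bits_per_mode(p_c=0.6, p_b=0.002, n=256, M=1e5))
```

Sweeping n over powers of 2 with this routine, once p_c and p_b are supplied from the receiver model, reproduces the family of R_E^(M,n)/C curves whose envelope is discussed above.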
Covert communications-An operational regime that justifies the N B 1 assumption, required for the log(1/N S ) entanglement-assisted capacity-ratio gain, is radio-frequency (RF), or microwave domain, signaling. Furthermore, aside from practical constraints of the peak source power and high losses, e.g., which may occur in deep turbulent atmospheric propagation or long-range deep-space communications, one obvious regime where N S 1 would be applicable is covert or provably undetectable communications. Pre-shared entanglement, e.g., distributed at optical frequencies by a future satellite network or the quantum internet, could be leveraged to enhance-potentially by an order of magnitude or more-the amount of information that an RF communication link could transmit provably covertly, i.e., ensuring that the transmission attempt is undiscoverable even by an all-powerful quantum-equipped adversary. For provably covert communications, regardless of whether Alice and Bob employ entanglement assistance or not, the mean transmitted photon number per mode N S must sat- where m is the total number of transmitted modes, and δ quantifies how stringent Alice and Bob are on being covert. The above condition on N S comes from Alice and Bob setting a requirement that the adversary's probability of error P e , in detecting their transmission attempt must satisfy, 1/2 ≥ P e ≥ 1/2 − δ. This dependence of N S on m ultimately leads to the square-root law of covert communications, i.e., O( √ m), but no more, bits can be transmitted reliably yet covertly [31,32]. Both the OPA and the FF-SFG receivers achieve up to a factor of 2 enhancement over C (see Supplementary Information, and [23]). Hence, covert communications using either of those receivers will obey the square-root law, albeit with a factor of 2 enhancement in the scaling constant. Our scheme in Fig. 1 can achieve a factor of log(1/N S ) capacity enhancement, in the N S 1, N B 1 regime. This will translate to being able to transmit O( √ m log m) bits of information reliably and covertly, thereby breaking the square-root law of covert communications (by leveraging pre-shared entanglement). However, a more careful analysis of this is in order: both to find the constant in the aforesaid scaling, and more importantly to prove a rigorous converse result to provably-covert entanglement-assisted communications. We leave such an analysis of our joint-detection receiver in the covert communication regime, for future work. Practical considerations and discussion-For the assumed values of η = 0.01 and N B = 10 photons per mode, the highest capacity achieved by the joint-receiver receiver discussed above, occurs at around M ∼ 10 5 . A more detailed discussion of why there is an optimal modulation-block length M is discussed in the Supplementary Information. For a typical SPDC entanglement source of optical bandwidth W ∼ THz, With M ∼ W T , M = 10 5 translates to a pulse duration T ∼ 100 ns. This means the BPSK phase-modulation bandwidth necessary would be ∼ 10 MHz, which is readily realizable with commercial-grade electro-optical modulators at 1550 nm. In order to bridge the remainder of the gap to C E , better codes and more complex quantum joint detection receivers will be needed, based on arguments closely aligned with those in [30]. We believe that the capacity achieved by the receiver in Fig. 1 can be improved by adopting an FF scheme to make use of the extra modesê m k 's in Fig. 
1, which was crucial for the optimality of the FF-SFG receiver for quantum radar [26]. Further improvement is possible via leveraging insights from a quantum joint-detection receiver for classical optical communications [33] which combines the GM and the Dolinar receiver [34]. This improved scheme would modulate Mmode SPDC pulses using a BPSK first-order Reed-Muller code, but now FF-SFG modules will be sandwiched by non-zero-squeezing two-mode-squeezing stages as in [26], and the detectors at the output of the GM stages will feed back into setting the aforesaid squeezing amplitudes, adaptively. We leave this calculation for future work. It should be obvious that we could have instead used a PPM modulation format, instead of BPSK Hadamard codewords followed by the GM stages, and achieved the same capacity performance. In such a scheme, Alice and Bob would need to pre-share (brighter) SPDC signalidler mode pairs of mean photon number per mode nN S , and Alice would send an M -temporal mode signal pulse (of mean photon number nN S ) and nothing (vacuum) in n − 1 pulse slots. So, only M modes will be excited out of each M n transmitted modes. FF-SFG stages will be used to demodulate, as before, but no GM stages will be needed. Since the optimal PPM order is n ∼ (E log(1/E)) −1 with E = M ηN S /(2N B ) (see Supplementary Information), which translates to nN S ∼ N0 log(N0/N S ) with N 0 = 2N B /(M η). For the numbers in Fig. 2, i.e., η = 0.01, N B = 10, M = 10 5 , we get N 0 = 0.2. This implies that for N S < 0.01, we get nN S 0.07. Thus the idler pulses are still in the regime that the implicit "qubit approximation" analysis of the SFG borrowed from [26] is valid. We relegate a slightly more detailed discussion of PPM and on-off-keying (OOK) modulation for entanglement-assisted communications, to the Supplementary Information. There, we also discuss pros and cons of the BPSK modulation described in this paper, and PPM or OOK modulation, both with regards to the requirements on shared entanglement, and the complexity of the receiver. It should be further noted that the PPM modulation format in the context of entanglement-assisted communications as described above, was proposed for entanglement-assisted communication over a general quantum channel over a finite-dimensional Hilbert space [35]. This technique has been termed "position based encoding" in the quantum information theory literature [36]. However, there is no simple translation known as yet of the receiver measurement that must be employed to achieve C E with position-based encoding, into a structured optical receiver. It will be interesting, in future work, to find a structured optical receiver design that achieves the full entanglement-assisted capacity C E afforded by quantum mechanics. A final point worth noting: pre-shared entanglement affords a large capacity enhancement in the regime of low transmitted signal power per mode and high thermalnoise mean photon number per mode, despite that entanglement does not survive propagation through this (entanglement-breaking) channel. It is this exact same regime where an entangled-state transmitter was shown to attain a superior performance compared to any classical source, for detecting a target at stand-off rangea concept termed quantum illumination [25,26,37]. These two observations are intimately related. 
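The optimal order quoted above, n ∼ (E log(1/E))^(-1) with E = MηN_S/(2N_B), equivalently gives nN_S ∼ N_0/log(N_0/N_S) with N_0 = 2N_B/(Mη). A few lines of Python (a sketch, natural logarithm assumed) reproduce the quoted numbers; note that N_0 = 0.2 and n ≈ 7 correspond to M = 10^4, the example worked in the Supplementary Information, while M = 10^5 gives N_0 = 0.02 from the same expression.

```python
import numpy as np

def ppm_order_and_occupancy(M, eta, N_S, N_B):
    """Optimal PPM/Hadamard order n ~ (E ln(1/E))^-1, E = M*eta*N_S/(2*N_B)."""
    E = M * eta * N_S / (2.0 * N_B)
    n = 1.0 / (E * np.log(1.0 / E))
    return n, n * N_S

eta, N_B, N_S = 0.01, 10.0, 0.01
for M in (1e4, 1e5):
    N_0 = 2.0 * N_B / (M * eta)
    n, nNS = ppm_order_and_occupancy(M, eta, N_S, N_B)
    print(f"M={M:.0e}: N_0={N_0:.2f}, optimal n ~ {n:.1f}, n*N_S ~ {nNS:.3f} "
          f"(check: N_0/ln(N_0/N_S) = {N_0 / np.log(N_0 / N_S):.3f})")
```

For M = 10^4 this prints n ≈ 6.7 (≈ 7) and nN_S ≈ 0.07, matching the values quoted in the Supplementary Information.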
These are both tasks that involve extracting information modulated onto one half of a two-mode entangled state, where the information-bearing half undergoes propagation over an entanglement-breaking channel.

Acknowledgments

Supplementary Information

Appendix A: Bit rate scaling in the low photon number regime

The purpose of this appendix is to show that, in the relevant regime of operation of the entanglement-assisted communication system described in the main paper (N_S ≪ 1, N_B ≫ 1), the scaling of the ratio between the rate achieved by our proposed joint detection receiver and the Holevo capacity, R_E^(M)/C, matches the ratio between the entanglement-assisted capacity and the Holevo capacity, C_E/C:

R_E^(M)/C ∼ C_E/C ∼ log(1/N_S), (A1)

where N_S is the mean transmitted photon number per mode, and M is the length (number of modes) of the modulated SPDC signal pulse. We first derive C_E/C.

Entanglement-assisted capacity enhancement. Intuitively, the scaling C_E/C ∼ log(1/N_S) in (A1) follows from the dominant term in the expression for C_E as N_S → 0 being −N_S log N_S for any constant N_B > 0, while the Taylor series expansion of C at N_S = 0 yields C = ηN_S log(1 + ((1 − η)N_B)^(−1)) + o(N_S), so that

C_E/C ∼ log(1/N_S) as N_S → 0. (A2)

Formally, one can use L'Hôpital's rule to evaluate the limit of C_E/(C log(1/N_S)) as N_S → 0 (Eq. (A3)), which yields the scaling. Note that the right-hand side (RHS) of (A3) is zero when N_B = 0, corresponding to the fact that the ratio C_E/C ≤ 2 in the noiseless regime. The plot of the ratio C_E/C as a function of N_S and N_B, for channel transmissivity η = 0.01, in Fig. 3 yields further insight. At optical frequencies, the Planck-law-limited thermal-noise mean photon number per mode N_B ranges between 10^(−5) and 10^(−6). At such small N_B values, despite the scaling in (A2), the actual capacity ratio is essentially at or below 2 (the maximum value when N_B = 0) over the entire range of chosen values of N_S, 10^(−6) to 10^2. The ratio would be significantly large only for extremely small values of N_S that are not physically meaningful. However, at N_B = 100, which is quite reasonable at microwave wavelengths, C_E/C exceeds 10.

Proof of optimal capacity scaling achieved by the joint detection receiver. Consider the use of an order-n pulse position modulation (PPM) scheme over a channel with loss and noise. PPM encodes information by the position of a pulse (e.g., a coherent state of light) in one of n orthogonal modes (e.g., time bins) at the input, which is direct-detected at the output (e.g., by a single-photon detector). Loss attenuates the transmitted pulse amplitude, and noise results in potential detection events in one or more bins. Ignoring detection events in multiple bins (i.e., treating them as "erasures"), and assuming an equiprobable selection over the n inputs (which maximizes the throughput), the Shannon mutual information, expressed in bits per mode, of the induced n-input, (n + 1)-output discrete memoryless channel (DMC) is given by [29, Eq. (16)] in terms of p_e and p_d, where p_e is the probability of the detection event occurring exclusively in the bin corresponding to the position of the pulse at the input, and p_d is the probability that a detection event occurs in a single bin that is different from the one containing the input pulse. Denoting by p_c the probability of a detection event in the bin corresponding to the input pulse and by p_b the probability of a detection event in another bin [29, Sec. IV],

p_e = p_c (1 − p_b)^(n−1), (A5)
p_d = (1 − p_c) p_b (1 − p_b)^(n−2). (A6)

We specialize the result from [29] to find the channel capacity of the DMC induced by the modulation-code-channel-receiver combination described in Fig. 1 of the main paper.
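A tiny helper makes the single-click/erasure bookkeeping in (A5)-(A6) concrete. This is a sketch based on the reconstruction above (a click only in the pulse bin for p_e; a click in exactly one wrong bin for p_d; everything else an erasure), evaluated at illustrative probabilities rather than values derived from the receiver model.

```python
def ppm_outcome_probs(n, p_c, p_b):
    """Single-click outcome probabilities of the n-input, (n+1)-output PPM/erasure DMC.

    p_c: click probability in the bin carrying the pulse.
    p_b: click probability in any one of the other n-1 bins.
    """
    p_e = p_c * (1.0 - p_b) ** (n - 1)                 # click only in the correct bin
    p_d = (1.0 - p_c) * p_b * (1.0 - p_b) ** (n - 2)   # click only in one wrong bin
    p_erasure = 1.0 - p_e - (n - 1) * p_d              # zero clicks or multiple clicks
    return p_e, p_d, p_erasure

# Illustrative numbers only.
print(ppm_outcome_probs(n=64, p_c=0.3, p_b=0.005))
```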
Let us recall that the scheme involves BPSK-modulation of the signal modes of M pre-shared two-mode-squeezed-vacuum (TMSV) states, repeating the above n times, encoding an order-n binary Hadamard code, and transmission of the M n modulated modes over M n uses of the single-mode lossy-noisy bosonic channel N N B η , followed by demodulation and detection by our joint detection receiver (JDR). This scheme results in detection events that are statistically identical to demodulating PPM in the presence of noise. Thus, we seek: where we determine p e and p d as follows. First, let's recall the definitions. The mean number of photons per mode in the signal modes of the TMSV transmitted by Alice is N S , and the mean photon number of the thermal noise background per transmitted mode is N B . The modal power transmissivity of the bosonic channel is η ∈ (0, 1], which implies that Bob's received mean number of photons per mode is To calculate p c and p b , we assume the photodetection statistics of the i-th outputs of each of the K Green Machines in the JDR are statistically independent, and . Thus, we have: with A = nM ηN S (N S +1) . Using the conditions: we can make the following approximations using the limits as N S → 0 and K → ∞: These lead to the following approximations for p c and p b : where γ = 1 − e −2(1+(1−η)N B ) . Substitution of approximations in (A14) and (A15) into (A5) and (A6) yields: where we assume n 1 so that n − 1 ≈ n for the approximation in (A17). When N S → 0, we can approximate p e and p d by the Taylor series expansions at N S = 0 of (A17) and (A18), respectively: Substituting (A19) and (A20) into the last two terms of (A4), and approximating n−1 n ≈ 1, reveals that only the first term of (A4) has a significant dependence on n in our regime of interest. Thus, for the optimal order, we need: The linear approximation in (A20) is insufficient to find n * . We follow the methodology in [29] by substituting in (A21) the quadratic Taylor series expansion at N S = 0, . This reduces the problem in (A21) to finding the location of the extremal values of f (n) = (u + vn) ln n by solving for n, which involves the principal branch of the Lambert W -function [39,Sec. 4.13]: where W (xe x ) = x for x ≥ −1. Substituting (A19) and (A25) into (A7), we obtain: where g(x) = (x + 1) log(x + 1) − x log x. As N S → 0, the logarithmic term dominates (A26), and we obtain the scaling: 3. Connection with PPM where dark-click rate is proportional to mean energy per slot In this subsection, we will consider a cruder approximation of R (M ) E , providing an alternative proof of the scaling in (A27), but one that lets us establish a connection with a problem that was studied by Wang and Wornell in the context of coherent-state PPM modulation, where the dark click probability per mode λ is propor-tional to the mean photon number per mode E [38]. Recall that R is the envelope of capacities attained by our scheme over all n, for a given M . Applying the conditions pertinent to our problem setting, κN S N S 1 N B 1/κ, we get N S → N B , 1/(1+N T ) K → e −N S N B and A/(1−µ) → nM ηN S /2N B , which lead to the following simplified asymptotic expressions: 1 − p c ≈ e −(nE+λ) , and 1 − p b ≈ e −λ , λ = cE, with E = M ηN S /(2N B ) and c = 2N B 2 /(M η) a constant. This is exactly the setting of n-mode coherent-state PPM modulation and direct detection, where the dark click probability per mode λ is proportional to the mean photon number per mode E [38]. 
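The asymptotic expressions quoted just above, 1 − p_c ≈ e^(−(nE+λ)) and 1 − p_b ≈ e^(−λ) with λ = cE, E = MηN_S/(2N_B) and c = 2N_B²/(Mη), are straightforward to evaluate. The Python sketch below tabulates them for a few Hadamard/PPM orders n at an illustrative operating point (not necessarily the paper's), which helps visualize how increasing the order trades pulse-bin clicks against background clicks.

```python
import numpy as np

def asymptotic_click_probs(n, M, eta, N_S, N_B):
    """1 - p_c ~ exp(-(n*E + lam)), 1 - p_b ~ exp(-lam), lam = c*E (expressions quoted above)."""
    E = M * eta * N_S / (2.0 * N_B)
    lam = (2.0 * N_B ** 2 / (M * eta)) * E
    return 1.0 - np.exp(-(n * E + lam)), 1.0 - np.exp(-lam)

eta, N_B, M, N_S = 0.01, 10.0, 1e4, 1e-3        # illustrative operating point
for n in (2, 7, 32, 128):
    p_c, p_b = asymptotic_click_probs(n, M, eta, N_S, N_B)
    print(f"n={n:4d}:  p_c={p_c:.4f}  p_b={p_b:.4f}")
```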
The leading-order terms of the optimal capacity for this setting, in the regime of E 1, is given by: with the optimal PPM order, n = (E log(1/E)) −1 [38]. Applying this result to our problem, we get to leading order. In the same regime as above, κN S N S 1 N B , the leading order term for the Holevo capacity (attained using coherent states and Gaussian amplitude-and-phase modulation), C ≈ ηN S /N B , and that of the entangled-assisted capacity (achieved via an SPDC transmitter and phase-only modulation), C E ≈ (ηN S /N B ) log(1/N S ) [23]. It therefore follows that, proving that our transmitter-receiver structure attains the optimal capacity scaling. Numerical comparisons In Fig. 4, we compare the two approximations for R (M ) E : the one we obtained by modifying the Jarzyna-Banaszek analysis of PPM applied to our problem, shown in Eq. (A26), and the one we obtained from the Wang-Wornell PPM analysis, shown in Eq. (A29). It is seen that the former, our approximation, is closer to the true envelope, especially for smaller values of M . For a typical SPDC entanglement source of optical bandwidth W ∼ 1 THz, and M ≈ W T , M = 10 5 modes in a signal pulse translates to a pulse duration of T ∼ 100 ns. This means the BPSK phase-modulation bandwidth necessary would be ∼ 10 MHz, which is readily realizable with commercial-grade electro-optical modulators at 1550 nm. 6. Potential improvements in joint detection receivers for future work In the main paper, we discuss a few ideas for improved receiver performance for entanglement assisted communications, including better codes (e.g., Reed Muller codes, along the lines of [33]) and exploiting detection of the "noise modes" in the SFG stages, along the lines of [26]. In addition, we would like to note that as N B − > ∞, E /(C ln N S ) → 1/2, which indicates a possible check, to see if the entanglementassisted capacity attained by an improved receiver design improves the aforesaid ratio from 1/2 to 1. Appendix B: OPA receiver analysis In the low photon number regime (N S 1) the communication capacities are well-approximated by the Taylor series expansion around N S = 0. For example, the Holevo capacity C(η, N S , N B ) is: (B1) Here we derive the Taylor series expansion of the entanglement-assisted communication capacity with an SPDC source, BPSK modulation, and the OPA receiver [25] of gain G. We use it to evaluate the entanglement-assisted capacity gain achieved by an OPA receiver over the Holevo capacity. This channel's capacity is the classical mutual information between the random binary phase input θ ∈ {0, π}, P (θ = 0) = q, modulating the block of M transmitted symbols (i.e., M -fold tensor product of TMSV states) and the photon-count output N of Bob's detector, optimized over the probability distribution of the input defined by q: The probability that the photon counter records k photons over M modes is: When phase θ is transmitted, the mean received photon number per mode is: where N S is the mean photon number in each signal and idler mode, N B is the mean thermal noise injected by the environment, η is the channel transmissivity, N S ≡ ηN S + (1 − η)N B + 1, G is the gain of the OPA, and C p ≡ ηN S (N S + 1). The Taylor series of mutual information I(θ; N ) at N S = 0 is: Substitution of (B3) and evaluation of Q θ (k, N S ) N S =0 by taking the limit lim N S →0 Q θ (k, N S ) yields: where N B ≡ 1 + (1 − η)N B . Well-known results for the moments of binomial distribution are used to evaluate the sum in (B6). 
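The OPA-receiver capacity in this appendix is the mutual information of a binary-phase input (prior q, 1 − q) and a photon-count output, maximized over q. Since the exact count distributions Q_θ(k) did not survive extraction cleanly, the sketch below only illustrates that optimization step in Python: given any two conditional count distributions it computes I(θ; N) and maximizes over q by a grid search. The geometric (Bose-Einstein) distributions used for the demo are placeholders, not the paper's Q_θ.

```python
import numpy as np

def mutual_information(q, Q0, Q1):
    """I(theta;N) in bits for a binary input with prior (q, 1-q) and count distributions Q0, Q1."""
    P = q * Q0 + (1.0 - q) * Q1                      # output distribution
    def h(p):                                        # Shannon entropy, 0*log0 treated as 0
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    return h(P) - q * h(Q0) - (1.0 - q) * h(Q1)

def geometric_counts(mean, kmax=200):
    """Placeholder thermal (Bose-Einstein) photon-count distribution with the given mean."""
    k = np.arange(kmax + 1)
    return (mean ** k) / ((mean + 1.0) ** (k + 1))

Q0, Q1 = geometric_counts(0.8), geometric_counts(1.2)     # illustrative means only
qs = np.linspace(0.01, 0.99, 99)
best_q = max(qs, key=lambda q: mutual_information(q, Q0, Q1))
print(f"optimal q ~ {best_q:.2f}, capacity ~ {mutual_information(best_q, Q0, Q1):.4f} bits")
```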
Maximizing over q yields: The maximum gain from using the SPDC source, BPSK modulation and the OPA receiver over the Holevo capacity when N S 1 and N B 0 is thus: where lim G ↓ 1 indicates a one-sided limit taken from above, and we normalize the denominator by M to account for employing block encoding of M symbols. We note that, with such normalization, the gain does not depend on M . There is also no dependence on the transmissivity η. Appendix C: PPM and OOK modulation for Entanglement-Assisted Communications In this Appendix, we will discuss alternative modulation formats for entanglement-assisted communications, which also leverage continuous-variable SPDC-based preshared entanglement, and can also achieve the log(1/N S ) capacity-ratio improvement over the Holevo capacity. One alternative to the aforesaid scheme described in the main paper is for Alice to directly modulate PPM codewords. In such a scheme, Alice and Bob would need to pre-share (brighter) SPDC signal-idler mode pairs of mean photon number per mode nN S , and Alice would send an M -temporal mode signal pulse (of mean photon number nN S ) and nothing (vacuum) in n − 1 pulse slots. So, only M modes will be occupied by signal pulses out of each M n transmitted modes. FF-SFG stages will be used to demodulate, as before, but no GM stages will be used. The state of the nK output modes of the n K-stage FF-SFG modules will be identical to the above: One block of K modes carries displaced thermal statesρ th ( √ nN k , N T ), and the remainder n−1 of the Kmode blocks will be excited in zero-mean thermal stateŝ ρ th (0, N T ). The mean transmit photon number of both schemes are identical. The DMC induced by the modulationcode-receiver combination for both schemes are identical. Hence, the capacity achieved by the two schemes are identical. The optimal PPM order for the second scheme is the optimal Hadamard-code length for the first scheme. That optimal PPM-order (or Hadamard code length) is given by: n ∼ (E log(1/E)) −1 with E = M ηN S /(2N B ), which translates to nN S ∼ N0 log(N0/N S ) with N 0 = 2N B /(M η). For the numbers in Fig. 5, i.e., η = 0.01, N B = 10, M = 10 4 , we get N 0 = 0.2, and optimal n ≈ 7. This implies that that for N S < 0.01, nN S 0.07. This means that the idler pulses are still in the regime that the implicit "qubit approximation" analysis of the SFG borrowed from [26] is valid. There are key operational differences however, between the two schemes, which are described below: 1. Peak power usage-Even though the mean photon number that is transmitted over the channel is
2020-01-12T14:05:12.000Z
2020-01-12T00:00:00.000
{ "year": 2020, "sha1": "6dfc7404a303c75110ba3530c4ca02da6f0647e9", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2001.03934", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6dfc7404a303c75110ba3530c4ca02da6f0647e9", "s2fieldsofstudy": [ "Computer Science", "Physics" ], "extfieldsofstudy": [ "Physics", "Computer Science", "Mathematics" ] }
118428897
pes2o/s2orc
v3-fos-license
Diffusive chaos in navigation satellites orbits

The navigation satellite constellations in medium-Earth orbit exist in a background of third-body secular resonances stemming from the perturbing gravitational effects of the Moon and the Sun. The resulting chaotic motions, emanating from the overlapping of neighboring resonant harmonics, induce especially strong perturbations on the orbital eccentricity, which can be transported to large values, thereby increasing the collision risk to the constellations and possibly leading to a proliferation of space debris. We show here that this transport is of a diffusive nature and we present representative diffusion maps that are useful in obtaining a global comprehension of the dynamical structure of the navigation satellite orbits.

Introduction

The past several years have seen a renewed interest in the dynamics of medium-Earth orbits (MEOs), the region of the navigation satellites, since the community has realized the inherent dangers posed by space debris, meanwhile stimulating a deeper dynamical understanding of this multifrequency and variously perturbed environment. The effects of the Moon and the Sun on Earth-orbiting satellites, often negligible on short timescales, may have profound consequences on the motion over longer periods; this accumulating effect is a phenomenon known as resonance. The inclined, nearly circular orbits of the navigation satellites are not excluded from this situation. Several, but mainly numerical, works [5,4,14] quickly pointed out the key role played by the lunar and solar third-body resonances, especially on the orbital eccentricity. This instability manifests itself as an apparent chaotic growth of the eccentricity on decadal timescales, as illustrated by Fig. 1. Here, the orbits have been numerically integrated using an in-house, high-precision orbit propagation code, based on classical averaging formulations of the equations of motion, a well-known and efficient technique for treating long-term evolutions in celestial mechanics. Using a first-order variational stability indicator, the fast Lyapunov indicator (FLI) [7,8], these orbits have been declared a posteriori as chaotic and regular non-resonant.

Figure 1. Typical eccentricity history for orbits in the MEO region: the orbit with a large variation has been declared as chaotic by the FLI analysis, while the orbit with modest excursions is a regular non-resonant orbit. Note that the eccentricity can be transported to large values in the chaotic case.

From the analytical point of view, it is regrettable that, until now, no real effort had been made in the literature to guide the problem towards a global comprehension of the observed instabilities, capturing at the same time the (supposed) dynamical richness of the inclination-eccentricity (i-e) phase-space. The complexity of this perturbed dynamical environment, however, is now becoming clearer [13,3]. The present work summarizes our latest results towards understanding the chaotic structures of the phase-space near the lunisolar resonances. In particular, we show that the transport properties of the eccentricity in the phase-space, due to chaos, are of a diffusive nature, and we present some results on the numerical estimates of the diffusion coefficient relevant to navigation satellite parameters, especially for the European Galileo constellation.

The dynamics of MEOs

We review the main features and the recent results that we have obtained for the dynamical description of the MEO region.
This section emphasizes ideas but, for the sake of brevity, not the rigor of all the details involved. (In the averaged formulation, the quantity n denotes the (mean) mean motion; short-periodic variations are present only in the ξ_i, i = 1, ..., 6, terms, and because the (x̄_1, ..., x̄_5) are slow variables, a large step size can be used to propagate the dynamics numerically, which is useful for long-term ephemeris calculations and predictions.)

2.1. Overlap of the lunisolar secular resonances à la Chirikov. The Hamiltonian system, written in canonical action-angle variables, is a small perturbation of an integrable system, namely the Kepler two-body problem. Considering a mathematically simple but physically relevant dynamical model, the Hamiltonian governing the dynamics is composed of the Keplerian part of the geopotential, the oblateness effect of the Earth, and the gravitational perturbations of third bodies, i.e., the Moon and the Sun:

H = H_Kep + H_J2 + H_Moon + H_Sun. (2.2)

Explicit and detailed formulas for these terms can be found in several works [5,13,3]. The secular Hamiltonian, a 2.5 degree-of-freedom (DOF) system (i.e., 2-DOF and non-autonomous), useful for describing the long-term dynamics, has been derived from Eq. (2.2) and reduced, treating the resonances in isolation, to the first fundamental model of resonances [1] (a pendulum) near each resonance by constructing suitable (canonical) resonant variables [3]. These lunisolar secular resonances involve a linear combination of two angles of the satellite, the argument of perigee ω and the ascending node Ω, combined with the ascending node of the Moon Ω_M, which satisfy the resonant condition

σ̇_n = n_1 ω̇ + n_2 Ω̇ + n_3 Ω̇_M ≃ 0, n = (n_1, n_2, n_3). (2.3)

The resonance centers C_n are located in the action phase-space by the actions satisfying the equality σ̇_n = 0. Going back to the eccentricity-inclination variables, which are physically and geometrically more interpretable (especially to space engineers), it can be shown that condition (2.3) is equivalent to the relation f_{n,a}(e, i) = 0, a function parametrized by the initial semi-major axis a, a free parameter of the problem (in the secular version of the Hamiltonian, the canonical angle 'associated' with the semi-major axis is a cyclic variable, so that the semi-major axis is a first integral [3]). The resonance centers C_n form a dense network of curves in the i-e phase-space. As a recedes from 3 to 5 Earth radii, sweeping the navigation constellation regime, the resonance curves begin to intersect, indicating locations where several critical arguments σ_n have vanishing frequencies simultaneously. Treating each resonance in isolation, and using the fundamental reduction to the pendulum, the amplitudes ∆_n of each resonance associated with the critical argument σ_n have been estimated. The 'maximal excursion' curves W_n^± in the i-e phase-space, delimiting the resonant domains, are then defined from these amplitudes. We found a transition from a 'stability regime', where resonances are thin and well separated at a = 19,000 km (∼3 Earth radii), to a 'Chirikov one' [2], where resonances overlap significantly at a = 29,600 km (∼4.6 Earth radii), the initial semi-major axis of the European navigation constellation, Galileo, as illustrated in Fig. 2.

Figure 2. Lunisolar resonance centers C_n (solid lines) and widths W_n^± (transparent shapes) for a = 29,600 km, i.e., Galileo's nominal semi-major axis. This plot shows the overlap between the first resonant harmonics (|n_i| ≤ 2, i = 1, ..., 3). Galileo satellites are located near i = 56°.
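The resonance-center curves f_{n,a}(e, i) = 0 can be located numerically once the secular precession rates are available. The Python sketch below is a hedged first approximation, not the paper's model: it keeps only the dominant J2 (oblateness) contribution to ω̇ and Ω̇ and treats the lunar node rate as the constant regression −2π/18.61 yr, whereas the paper's resonant condition uses the full secular rates. The example harmonic (n_1, n_2, n_3) = (2, 1, 0) is the eccentricity-independent resonance 2ω̇ + Ω̇ = 0, whose center falls near i ≈ 56°, consistent with the Galileo inclination quoted in the caption above; harmonics with n_3 ≠ 0 additionally bring in the lunar node rate.

```python
import numpy as np

MU  = 398600.4418          # km^3/s^2
R_E = 6378.137             # km
J2  = 1.0826e-3
OMDOT_MOON = -2.0 * np.pi / (18.61 * 365.25 * 86400.0)   # regression of the lunar node, rad/s

def secular_rates(a_km, e, inc):
    """Leading-order (J2-only) secular rates of the perigee and node, rad/s."""
    n_mm = np.sqrt(MU / a_km ** 3)          # mean motion
    p = a_km * (1.0 - e ** 2)               # semi-latus rectum
    fac = n_mm * J2 * (R_E / p) ** 2
    omega_dot = 0.75 * fac * (5.0 * np.cos(inc) ** 2 - 1.0)
    Omega_dot = -1.5 * fac * np.cos(inc)
    return omega_dot, Omega_dot

def resonance_residual(n1, n2, n3, a_km, e, inc):
    """sigma_dot_n = n1*omega_dot + n2*Omega_dot + n3*Omega_dot_Moon; zeros locate C_n."""
    w_dot, O_dot = secular_rates(a_km, e, inc)
    return n1 * w_dot + n2 * O_dot + n3 * OMDOT_MOON

# Example: scan inclination at Galileo's semi-major axis for the (2, 1, 0) harmonic.
a_gal = 29600.0
incs = np.radians(np.linspace(40.0, 70.0, 301))
res = resonance_residual(2, 1, 0, a_gal, 0.01, incs)
i_star = incs[np.argmin(np.abs(res))]
print(f"approximate resonance-center inclination: {np.degrees(i_star):.1f} deg")
```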
This important structural and dynamical fact was obscured for nearly two decades, despite the pioneering breakthroughs of T. Ely [5,6]. The analytical Chirikov resonance-overlap criterion that we applied was tested against a detailed numerical FLI analysis of the phase-space, producing a stability atlas, a collection of FLI maps. The FLI analysis confirmed the existence of the complex stochastic regime, whose effect on the dynamics is of primary importance [3].

2.2. Transport in action space. Since the famous example of the asteroid Helga in Milani and Nobili's work [9], it has been known that physical orbits in the Solar System can be much more stable than their characteristic Lyapunov times would suggest, a concept referred to as stable chaos. Thus, understanding the physical manifestation (the signature) of chaos on the system is preeminent. Rosengren et al. recently demonstrated that the transport phenomenon acting in phase-space is intimately related to the resonant skeleton described by the centers C_n [13], confirming Ely's original results [6], but on a much shorter timescale. They showed, via a discretization of the dynamics (stroboscopic approaches), that the transport in the phase-space is mediated by the web-like structures of the secular resonance centers C_n, allowing nearly circular orbits to become highly elliptic (as already illustrated by Fig. 1). This idea was further developed by taking advantage of the geometry and topology of the chaotic structures revealed by our FLI analysis. In fact, we showed that chaotic orbits, when superimposed on the background dynamical structures obtained via the FLIs, tend to evolve in the chaotic sea, consequently exploring a large phase-space volume. This is in contrast to stable orbits, whose excursions in eccentricity and inclination are much more modest, being confined by KAM curves. Thus, in addition to quantifying the local hyperbolicity, the FLI maps also reveal how the transport is mediated in the phase-space, revealing the preferential routes of transport [3].

Diffusive chaos

Because of the analytical description that we achieved, Chirikov diffusion, the diffusion of an orbit along a resonant chain (a consequence of the overlap criterion [11]), was natural to suspect. In order to measure the value of the diffusion coefficient, we introduce the mean-squared displacement in eccentricity, ⟨(∆e(τ))²⟩, where ∆e(τ) = e_{t_i+τ} − e_{t_i} and ⟨·⟩ is an average operator, and the diffusion coefficient related to the eccentricity is defined from this quantity. We have computed the coefficient D_e(τ) in a purely numerical way, following orbits for long timespans (more than 5 centuries, a timescale representing around 3.5 × 10^5 revolutions around the Earth) using our precision orbit propagator. This type of averaging is called dynamic averaging [12]; it differs from spatial averaging, where the average operator is taken over some appropriate ensemble, but the two give the same results if the ergodic hypothesis holds. The coefficient D_e is quantitatively defined as the slope of the linear least-squares fit of ⟨(∆e(τ))²⟩ against the lag τ. Figure 3 shows the evolution of the mean-squared displacement as a function of the lag τ for the two orbits of Fig. 1. Firstly, it is legitimate to talk about diffusion: diffusive processes are commonly characterized by a power-law relationship ⟨(∆e(τ))²⟩ ∝ τ^ν. Having here that ν is very close to 1, we found normal diffusion behavior for the chaotic orbit. We also found that the diffusion coefficient changes significantly depending on the orbit's nature; the slopes of the linear least-squares fits differ by up to 6 orders of magnitude between the chaotic and the regular orbit, the latter appearing here as flat. We extended the computation of D_e to a particular domain of the i-e phase-space.
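The mean-squared-displacement estimate of D_e described above is simple to implement once an eccentricity history e(t) is available. In the Python sketch below a synthetic random walk stands in for the output of the averaged orbit propagator (which is not reproduced here); the dynamic (time) average is taken over all pairs separated by a lag τ, the slope of a linear fit of ⟨(∆e(τ))²⟩ against τ plays the role of D_e, and the power-law exponent ν comes from a log-log fit.

```python
import numpy as np

def msd(series, max_lag):
    """Dynamic (time) average of (e_{t+tau} - e_t)^2 for lags 1..max_lag (in samples)."""
    lags = np.arange(1, max_lag + 1)
    return lags, np.array([np.mean((series[lag:] - series[:-lag]) ** 2) for lag in lags])

def diffusion_fit(lags, msd_vals):
    """Slope of a linear fit (D_e estimate) and power-law exponent nu from a log-log fit."""
    D_e = np.polyfit(lags, msd_vals, 1)[0]
    nu = np.polyfit(np.log(lags), np.log(msd_vals), 1)[0]
    return D_e, nu

# Synthetic stand-in for a propagated eccentricity history.
rng = np.random.default_rng(1)
ecc = 0.01 + np.cumsum(rng.normal(0.0, 1e-5, size=20000))   # placeholder random walk

lags, msd_vals = msd(ecc, max_lag=2000)
D_e, nu = diffusion_fit(lags.astype(float), msd_vals)
print(f"D_e (slope) ~ {D_e:.3e} per sample,  nu ~ {nu:.2f}  (nu ~ 1 means normal diffusion)")
```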
Figure 4 shows the results of the computation in the MEO region, for physical parameters relevant to navigation satellites, covering a small domain of the phase-space, namely the rectangle [0, 0.02] × [53°, 57°], sampled uniformly with 185 × 160 initial conditions. The palette scale gives the magnitude of D_e, indicating in which phase-space regions the diffusivity is fastest. For the volume of the phase-space that we explored, up to a large but finite time t_f, all diffusion coefficients were finite, an indication that the motion does not spread more rapidly than diffusively. This leads to the important fact that typical navigation satellites obey a diffusion law reasonably well. When KAM curves are approached, we found that the diffusion coefficients go to zero sufficiently fast. Namely, by comparing the results of the diffusion maps with the FLI maps computed for the same physical parameters, we found very good agreement between the dynamical structures revealed by D_e and by the FLIs, implying in general a one-to-one correspondence between high local hyperbolicity and high diffusivity (a non-trivial result, due to the existence of stable chaos), as shown in Fig. 4. It is important to note that even for moderate eccentricity, e ≤ 0.005, we may find high-diffusivity regions, whose spatial organization in the phase-space is complex. We redid the same computation as that in Fig. 4, but changed the initial phases of the system (all other parameters being identical), as presented in Fig. 5. We can observe how the structures evolve when the initial angles are changed, even if highly diffusive orbits can be found in both cases. This observation illustrates, in essence, the difficult question of determining which initial phases of the initial state lead to a diffusive chaotic response of the system. This is intimately related to the representation of the dynamics in a reduced-dimensional phase-space. Moreover, the diffusion coefficient calculated here gives no information about which angles will ensure or avoid diffusive chaos for a fixed initial eccentricity and inclination. This point, of particular practical interest, undoubtedly needs further investigation. At the very least, diffusion maps such as those presented here should be computed for an ensemble of initial phases, but this represents a difficult and formidable computational task. Let it be recalled that the Hamiltonian of Eq. (2.2) is 3-DOF and autonomous, and the global understanding of such systems is on the cusp of current trends in dynamical systems research.

Conclusions

The overlapping of lunisolar secular resonances in MEO gives rise to complex chaotic dynamics affecting mainly the eccentricity. The local hyperbolicity associated with the resulting stochastic layer is synonymous with macroscopic transport in action space, with typical Lyapunov times on the order of decades. We have shown that these transport properties obey a diffusion law, and we have presented dynamical maps based on the numerical estimation of the diffusion coefficient. Our results show that we may find diffusive orbits even for moderate eccentricity near the operational inclinations of (and for physical parameters relevant to) the navigation satellites.
Nonetheless, the number of degrees of freedom makes a global comprehension of the tableau difficult, as attested by the diffusion maps presented herein. The computational challenge emanating from this difficulty is surely an invitation to reach beyond our communities and to make considerable efforts to redesign the standard way in which 'stability maps' in general are traditionally generated in Celestial Mechanics. Significant improvements will certainly arise from the use of adaptive algorithms and unstructured grids based on clustering techniques [10].
2016-06-01T04:21:23.000Z
2016-06-01T00:00:00.000
{ "year": 2016, "sha1": "c52b1fc9c181bf81d3bc7e38657ff36b003b9a18", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1606.00106", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c52b1fc9c181bf81d3bc7e38657ff36b003b9a18", "s2fieldsofstudy": [ "Physics", "Geology" ], "extfieldsofstudy": [ "Physics" ] }
212979262
pes2o/s2orc
v3-fos-license
The Application of a Mixed Teaching Model to the Academic English Teaching for Graduate Students at Inner Mongolia University This paper explores the application of a mixed teaching model of Task-Based Learning (TBL), Collaborative-Inquiry Model (CIM) and MOOCs to the academic English teaching for graduate students at Inner Mongolia University, China. Teaching tasks of eight units are assigned for students to learn through the online course on MOOC in advance. Graduate students are grouped to work and cooperate with one another to complete the presenta-tions of their learning results. A five-year teaching practice indicates that 1) graduate students appear more active in class than students taught in traditional class; 2) the writing quality of abstracts is greatly improved through the cooperative learning pattern; 3) students show more satisfaction for the course than ever before regarding the teacher’s help for their academic activities. The application of the new teaching model has not only benefited the Inner Mongolia University (IMU) students, but also set up a pioneering ex-ample in teaching reform for graduate students and provided valuable experiences for other universities in ethnic areas. Introduction The English Syllabus for Non-English Major Postgraduates clearly points out that the teaching goal of graduate English is to enable students to master English as a tool for their own major learning, research and international exchanges. Postgraduates are to be cultivated to be those who are capable of writing and publishing English academic papers, participating in international academic conferences and exchanges and relatively smoothly reading the English literature and materials of the major (Ministry of Education of the People's Republic of China, 2007). This should be regarded as one of the most important criteria to evaluate the English teaching system and mode of postgraduates. In order to understand the needs of non-English major graduate students of Inner Mongolia University for degree courses, the teacher made an investigation by employing questionnaires and interviews. Through the interview and questionnaire survey of non-English major graduate students enrolled in 2014 and 2015 at Inner Mongolia University, it reveals that there are four major needs for learning academic English for non-English major graduate students: 96.33% of graduate students want to achieve the goal of reading English literature by learning the course; 83.49% expect to write academic writings in English; 63.3% hope to write emails and communicate with scholars in English; 52.29% of them anticipate using English for oral communication as their respective principal objectives for learning the course. According to the survey, 96.5% of graduate students did not receive any systematic training of academic English before entering the university; 98% of graduate students claimed that their only academic writing experience was writing their undergraduate papers for bachelor's degree; more than 85% of the students held the view that the biggest difficulty in English learning at the graduate stage was that they had no idea of the writing norms and discourse characteristics of English academic papers. The survey shows that graduate students are in urgent need of systematic guidance and training in reading and writing academic papers. Therefore, the cultivation of graduate students' academic English abilities should be at the core of graduate English teaching. 
Since the fall of 2015, in order to meet the immediate needs of graduate students, based on the teaching guidelines from the university and graduate college, and through the scrupulous arrangement of the Foreign Languages College of Inner Mongolia University, the teaching team for graduate students has changed the traditional English teaching that once focused solely on developing students' basic language skills, into the general academic English teaching for cultivation of academic English abilities. This shift happened partly as a result of the inspection tour surveying English teaching situations for graduate students in more than fifty "985" and "211" universities in China. Those well-known universities aim to cultivate international communication abilities of graduate students in academic English. However, graduate students in ethnic areas have weak English foundation, low enthusiasm and lack of confidence in English learning, all of which lead to the complexity and peculiarities of English teaching for graduate students in ethnic areas. Currently, traditional teacher-centered teaching mode is mostly adopted in postgraduate English teaching in ethnic areas, which focuses solely on developing students' basic language skills, without cultivating academic English abilities of graduate students. In view of this, the members of the teaching team for graduate students at Inner Mongolia University have updated and improved the previous curriculum and teaching methods, carried out a mixed teaching model of TBL, CIM and MOOCs and made the classroom teaching reform adjusting to the learning needs and actual English level of graduate students in ethnic areas, with the aim of better reading ability leading to better writing ability, and the teaching team believes that good group member activities are able to promote each member's performance in writing. Thus the teaching team could scientifically guide the graduate English teaching at Inner Mongolia University to be more academic, practical, advanced and innovative. Accordingly, a five-year reform of academic English courses for graduate students has been carried out at Inner Mongolia University, exploring effective ways to improve students' academic English abilities in practice, and remarkable results have been achieved. (Ellis, 2003). The mixed academic English teaching model is shown in Figure 1. General Academic English Teaching The reform and practice of English teaching for graduate students at our university involve students of every major and every college at the university. The characteristics and differences of arts and science disciplines have been taken into account. Therefore, the teaching team chooses the teaching materials and teaching methods commonly used in various disciplines, which are not based on the specific requirements of a particular discipline or major, but on the common applicability and all-purpose respect of academic English as a whole. The selected textbook of Inner Mongolia University is Reading and Writing for Research Papers published by Tsinghua University Press, which is divided into four topics and each topic has two units, including the understanding of plagiarism; the use of dictionaries; cross-cultural communication and EFL writing. The topic selection is closely related to students' English study and academic life, which can raise students' interest in reading research papers. 
Curriculum Design In our curriculum design, for the first two weeks, the teacher mainly focuses on teaching the overall ideas and methods of reading and writing research papers in class. According to the AIMRD (abstract, introduction, methods, results, discussion) paradigm commonly used in research papers of various disciplines, the teacher guides postgraduates to read and analyze the original research paper in this unit and gradually construct the basic elements of academic papers, and makes students understand and master academic norms, structure features and language features of academic discourses. The teacher motivates students to analyze the structure of one research paper, the author's point of view, the long and complex sentences, the language style and the characteristics of words and sentences in this academic article. In this way, students would get familiar with the argumentative language features of academic articles, build up their academic vocabulary, improve the efficiency of reading the academic articles, and grasp the topic sentences of the paragraphs and the logical relationship between them. Ultimately students could get important points of the article, distinguish the main ideas of different sections from their supporting details and read academic papers efficiently. When encountering long and difficult sentences, graduate students are instructed to first analyze the sentence structure, and then understand the main point that the whole sentence conveys and subsequently, the logical relationship between them. Finally, they are asked to translate those sentences based on understanding, and refine them into authentic Chinese ones. In the practice of students' English-Chinese translation, students' translations could be rather illogical in the target language. Sometimes, although they correctly understand the original text, they unconsciously follow the sentence structure of the English source text, resulting in the translation being quite different from the Chinese expression habits. The most direct cause is that students do not have a systemat-ic and clear understanding of the overall and specific differences between English and Chinese. First of all, the teacher systematically imparts to the students the knowledge of the overall and specific differences between English and Chinese, so as to raise students' awareness of the differences between English and Chinese. The teacher then gives students some knowledge of the characteristics of both English and Chinese while they do the translation. And meanwhile they could apply the common methods and techniques in translation. Through practice, students would be familiar with translation skills, and improve their translation abilities. In the end, they increase their understanding of English and Chinese culture and thinking. For the next fourteen weeks, the teacher first assigns teaching tasks of eight units for students to complete in a small group. The teaching of the unit includes the following important elements of a research paper: 1) abstract, 2) introduction, 3) methods, 4) results, 5) charts and data in the paper, 6) discussion, 7) references and annotations, 8) paraphrasing and summarizing. The teacher tells students that there is an online course "Academic Norms and Paper Writing", which could help them achieve their learning objectives if they learn it in advance through electronic devices including PC. 
The group members discuss and work with each other and complete the presentation of learning results through division of work and cooperation. The teacher mainly focuses on learning tasks and solving problems, and assists students in completing explorations of some problems. The teacher plays the role of guidance, supervision, management and evaluation. Secondly, the feedback evaluation of the group work is carried out in class, which is completed by teachers and students together. In the achievement exchange session, according to the report of each group, the teacher makes a quantitative evaluation of the group report and of each student's performance within the group. Students are asked to make assessments about each other's performance, and the teacher gives feedback. Questionnaire Survey of Teaching Situation Through the study of this course, 93.58% of the graduate students stated that they had got an overall understanding of academic English writing and the writing norms and discourse characteristics of academic papers in English, which was crucial for their reading English professional literature and writing academic papers. Through the study of this course, 89.91% said that they would consciously use the knowledge of the course to analyze the professional literature with what they had learned in class. 60.55% stated that while learning the course engaged them in undertaking tasks like giving presentations, they had found their problems and learned a lot from their classmates. 60.55% had done enough preparations and developed their language abilities in the presentation session. In order to accomplish the task assigned by the teacher, they had learned to grasp the topic sentence of each paragraph, the logical relationship between paragraphs. 58.55% stated after a period of study, they had mastered the reading skills of finding out the main points and supporting details of the academic writings. They could distinguish the argument from its supporting details, and figure out logical relations in the research paper. One student said. "It turns out that the article is so logical, which fosters my habit of deep reading and will be of great help to me in writing papers." General Academic English Writing Teaching According to the law of language learning, language input is the basis of language output, and high-quality language input lays the foundation for the writing of academic papers (Li, Wu, & Shi, 2017). The teaching of general academic English writing has combined reading and writing together, promoting writing abilities of graduate students through good reading, which, in turn, will eventually improve their writing abilities. In terms of language output, the teacher uses task-based learning strategy to guide students to master the writing norms and writing skills of English academic paper step by step. Students are required to work with team members to write an abstract for a research paper in English. In the cooperative learning of the group, the more competent learners lead and inspire other partners to work on the task and make common progress consequently. The teaching method of Collaborative-Inquiry Model can help graduate students solve the difficulties encountered in their learning to write abstracts in English. Since students come from different disciplines, when arranging classes, the team tries to put them of the same department, the same major or similar majors in the same class. Thus graduate students in one class can form study groups to learn more effectively. 
Interaction among the same team proved to be conducive to improving academic English writing abilities of graduate students. In order to test the effects of the shift from general English skills teaching to academic English teaching, abstract writing tests were conducted in the experimental class before and after teaching. In both tests, students were required to read an adapted English research paper and then write an abstract for it. A total of 60 students took part in two abstract writing tests. The test results show that the mixed teaching model of TBL and CIM and MOOCs can help students significantly improve their writing quality of abstracts through the cooperative learning pattern. In the pre-test, most students' abstracts are wordy, which makes the abstract lengthy, not concise at all. In the content, these abstracts contain too much less important information, which makes the abstract seem very redundant, and not direct and to the point of the research, ruining the overall impression an abstract leaves on readers. As for the structure, students do not know how to use IMRD structure of the research paper to organize abstract sentences. The abstract is a summary of the research paper, but students tend to take too many original sentences in the paper to put them together, which is not only poorly conveyed in meaning, but also results in poor coherence. When it comes to grammar, there appear various grammatical problems such as lack of a subject, wrong use of infinitives, and inconsistency between the subject and the predicate, which all affect the quality of an abstract. In contrast, after the group work, the text quality of the abstract is significantly higher than that of the pre-test text, being clear, logic, concise, cohesive, and practically without mistakes in tense and voice. The change of text quality before and after reflected students' own perception and mastery of the writing norms for abstracts of academic papers. The final outcome is the crystallization of the efforts of the whole group and, and it has also witnessed the progress of each group member. Teaching Effects The reform of general academic English teaching for graduate students at Inner Mongolia University has benefited more than 1000 students (both academic and professional masters) over five years' time since its beginning, and has been widely applauded by graduate students of the university. As the main driving force of the mixed teaching is students, the implementation effect is generally reflected by the feedback of graduate students after class and their self-assessment of learning effects and curriculum satisfaction (Spanjers et al., 2015;Rahman et al., 2015;Ekwunife-Orakwue & Teng, 2014;Owston et al., 2013). Therefore, we conducted a questionnaire survey on the graduate students who learned the course. 93.58% of the students think that by studying the course of academic English reading and writing, they have got a clear idea about academic English writing. This course could make them apply knowledge to practice, helping them tremendously with their reading and writing of academic papers. Students' confidence in reading English literature independently is raised, and so does their confidence in writing and publishing English research papers. According to the survey, 54.13% stated that they had realized the unique charm of the academic language and raised their interest and confidence in learning academic English well. 
58.72% claimed that their confidence in communication in academic English had been greatly strengthened so that some of them even had the idea of studying abroad to learn more from English experts. It also had greatly increased the proportion of students applying for the master's and doctoral overseas programs sponsored by China Scholarship Council and sponsored by Inner Mongolia University as well, and attending international conferences during their studies. Three supervisors of students for the master's degree were selected randomly from different colleges for open interviews. In the interviews, the supervisors also fully affirmed the improvement of students' English literature reading ability and academic paper writing ability during the semester. The statistical results given by the university indicated that papers published in international journals by these graduate students who had learned the course have been greatly improved in the past five years, not only in quality but also in quantity. In these five years of the teaching reform, students' learning passion in Eng-lish is mobilized, and their learning abilities are developed in the process of using their own intellect. Students could use all kinds of mobile devices and PC terminals to obtain resources on the MOOC platform to study the related course in advance, and then make inquiries and carry out collaborative learning, which gives full play to the dominant role of students. We teachers as well as students profited from the Chinese University MOOC courses like "Academic Standardization and Paper Writing", which aims to cultivate students' academic English language skills, and the course "English for International Academic Exchange", which is intended to cultivate students' academic literacy and cross-cultural communication ability in academic English. In line with the university's concept of combining research work with teaching practice, the teachers of this team attach more importance to research on academic English teaching in the last five years. The results of teachers' research work and teaching practice helped to elevate the overall classroom teaching level. Teachers of the team endeavor to keep up with the latest teaching techniques and bring about rewarding results of the teaching reform for graduate students at the university. Both the teacher and students have accomplished the task of learning, and the only difference is that students benefit from each other, from the teacher while the teacher grows with students and gains insight into how to make students learn better and how to improve their own abilities to meet the challenge from thoughtful students. Students' learning through the online course on MOOC beforehand is an input, which could be done by making use of fragmented time, and the teacher must ensure the input of the learning content in advance on the part of students. If they learn in advance, they would willingly participate in class activities and sharing of what they've learned. Interactivities between the teacher and the students, among the students themselves are increased, and a cooperative learning atmosphere is created, and the students appeared more active in class than students of the traditional class that solely learned English language skills for all those years. Students set up a task for themselves to learn with questions or problems before the class time, which results in better learning sense and abilities. 
Conclusion After five years' teaching practice, the academic teaching system and curriculum at Inner Mongolia University have been gradually improved, and some effective strategies and methods have been put forward to improve the academic English abilities of postgraduates at our university. At present, among all the colleges and universities in Inner Mongolia Autonomous Region, only Inner Mongolia University has changed graduate students' public English course into academic English course, which will provide valuable experiences and inspirations for carrying out the reform of public graduate English education in other colleges and universities in ethnic areas. In the next five years, this course will further improve the timeliness of teaching with the help of modern teaching technology. With the help of the MOOC platform of Chinese universities, which provides rich and high-quality learning resources, an effective mixed academic English teaching model of TBL, CIM and MOOCs will be implemented better, which is more innovative, more advanced, and more challenging for both students and the teacher. The teaching team is still exploring a better mixed academic English teaching model of TBL, CIM and MOOCs. Problems occurred too in the implementation of the teaching model. The teaching team is not scrupulous in detailing the teaching process; some learning session could only come true in a very ideal condition; and there are always a few students lagging behind with their assigned tasks, which will definitely affect the whole group, even the learning process of the whole class. These are some of the problems we have encountered that call for attention and ways out. Only when the teachers are harboring the right teaching philosophy, can this collaborative, task-based teaching model mixed with MOOC resources meet students' higher demands for the development of their academic thinking and abilities, which will eventually shape the academic future of graduate students.
2020-02-20T09:12:01.776Z
2020-02-11T00:00:00.000
{ "year": 2020, "sha1": "2cbd9bf382b65f0f6aad2b298671a96c172f113e", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=98304", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e735708595972b74312ee548cc4ad9212af760f0", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
245491472
pes2o/s2orc
v3-fos-license
Characterization of Prophages in Leuconostoc Derived from Kimchi and Genomic Analysis of the Induced Prophage in Leuconostoc lactis Leuconostoc has been used as a principal starter in natural kimchi fermentation, but limited research has been conducted on its phages. In this study, prophage distribution and characterization in kimchi-derived Leuconostoc strains were investigated, and phage induction was performed. Except for one strain, 16 Leuconostoc strains had at least one prophage region with questionable and incomplete regions, which comprised 0.5–6.0% of the bacterial genome. Based on major capsid protein analysis, ten intact prophages and an induced incomplete prophage of Leu. lactis CBA3626 belonged to the Siphoviridae family and were similar to Lc-Nu-like, sha1-like, phiMH1-like, and TPA_asm groups. Bacterial immunology genes, such as superinfection exclusion proteins and methylase, were found on several prophages. One prophage of Leu. lactis CBA3626 was induced using mitomycin C and was confirmed as belonging to the Siphoviridae family. Homology of the induced prophage with 21 reported prophages was not high (< 4%), and 47% identity was confirmed only with TPA_asm from Siphoviridae sp. isolate ct3pk4. Therefore, it is suggested that Leuconostoc from kimchi had diverse prophages with less than 6% genome proportion and some immunological genes. Interestingly, the induced prophage was very different from the reported prophages of other Leuconostoc species. cycles. Daily prokaryotic mortality of 20-50%, which may be a major source of the dissolved organic matter in nature, is estimated to originate from viral infections [15,16]. In addition to the virulent phages released after cell lysis, some phage genes are incorporated into the bacterial genomes by 10-20% as prophages, which are major contributors to the differences between bacteria, even within a species [17]. Through phage transduction, hosts often obtain foreign genes for resistance to environmental stresses and coexistence with phages [18,19]. Such events may affect bacterial ecology in terms of population changes in the microecosystem and contribute to the adaptation and evolution of microbial populations in natural environments [20]. Studies on phages during fermentation are required to determine whether they truly modulate kimchi fermentation or simply reflect the compositional changes in the bacterial community. Some studies on kimchiderived LAB phages have been performed; however, no studies on prophages present in the genomes have been conducted yet. Therefore, our aim in this study was to identify the prophage composition in kimchi-derived Leuconostoc genomes and compare them with other phages. Identifying and characterizing the phages of Leuconostoc, a major kimchi starter, might provide a better understanding of LAB ecology in the kimchi environment. Leuconostoc spp. Strains and Growth Conditions Eight bacterial strains of Leuconostoc were examined in this study (as shown in Table 1, with asterisks). The strains were inoculated at 1% (v/v) into de Man, Rogosa, and Sharpe (MRS) media (Oxoid, England) and cultured at 30°C for 24 h. Stock cultures were stored in 20% glycerol at −80°C. Prophage Identification The complete genome information of kimchi-derived Leuconostoc strains was downloaded from the Pathosystems Resource Integration Center (PATRIC) [21]. Based on the sequence data from PATRIC, prophageintegrated regions were analyzed using PHAge Search Tool Enhanced Release (PHASTER) [22]. 
PHASTER provides information on the completeness of the predicted phage-related regions according to the number of known genes/proteins contained in the bacterial prophage region: intact (>90%), questionable (90%-60%), and incomplete (<60%) regions. A prophage analysis tool, Prophage Hunter [23], was also used for further analysis of Leu. lactis CBA 3626. Phylogenetic Analysis The major capsid protein (MCP) sequences of intact Leuconostoc prophages and similar phages were aligned using ClustalW [24]. Phylogenetic trees were constructed using the neighbor-joining method of the MEGA7 software program [25]. Morphology and Phage-Encoded Resistance System Identification Superinfection exclusion (Sie) proteins were manually annotated as described previously [26]. Briefly, between the integrase and repressor of the prophages, proteins having one or more N-terminal transmembrane domains were predicted using the TMHMM Server, v. 2.0 [27] and protein adjacent to the metalloprotease and the metalloproteases were identified as Sie proteins. Methylase (MTase) proteins were predicted using BLASTp searches [28]. Prophage Induction and Validation Overnight cultures of Leu. lactis CBA 3626, Leu. citreum CBA 3621, and Leu. citreum CBA 3627 were inoculated at 1% (v/v) on fresh MRS broth and incubated at 30°C until an OD 600 reading of 0.2 was achieved. Then, mitomycin C (MitC) (Sigma-Aldrich, USA) was added to a final concentration of 0.2, 0.5, and 1 μg/ml [29]. MitC-treated culture and control (MitC non-treated) were grown and observed for 24 h, and the absorbance at OD 600 was measured every 2 h. Subsequently, the culture broth was centrifuged at 8,000 ×g at 4°C for 10 min, and the supernatant was filtered through a 0.22 μm filter (Millipore, USA). The filtered supernatants were concentrated through centrifugation at 26,000 ×g for 1 h. To confirm prophage induction, spotting assay and transmission electron microscopy (TEM) were performed. For the spotting assay, 100 μl of each Leuconostoc overnight culture was inoculated in 5 ml MRS soft agar (0.7% agar) and overlaid on MRS agar. Then, 10 μl of the concentrated supernatant was spotted on the lawn and incubated overnight at 30°C to observe the lysis zone [8]. To observe phage morphology using TEM, the concentrated supernatants were inoculated on a 200-mesh, carbon-coated copper grid (Ted Pella, USA) and stained with 2% uranyl acetate. The samples were observed using TEM (H-7600, Hitachi, Japan) at 80 kV [30]. To detect the induced phage using polymerase chain reaction (PCR), the primers for the MCPs of intact, incomplete, and questionable prophages of Leu. lactis CBA3626 were designed (as listed in Table S1). The primers for the MCP, endolysin, and tail proteins of the two fused, incomplete prophages of Leu. lactis CBA3626 are listed in Supplementary Table 2. The housekeeping gene glyceraldehyde 3-phosphate dehydrogenase was used as a control. Each concentrated supernatant was treated with DNase for 30 min at 37°C and inactivated at 75°C for 10 min to remove bacterial DNA. According to the manufacturer's protocol, 5 μl of the supernatant was used as the PCR template, and AccuPower Taq PCR PreMix (Bioneer, Korea) was added to a final volume of 20 μl. The PCR products were electrophoresed in 1.5% agarose to confirm the results. Comparative Genomics To compare the similarity of the induced prophage region of Leu. 
lactis CBA3626 (1391006-1428849) predicted using Prophage Hunter with other phages, BLASTn was used, and the phage genome annotation file with the highest query coverage was downloaded from the NCBI database. Genome comparison was performed using the tblastx algorithm in the Easyfig 2.5.5 software [31] with BLAST options of a maximum E-value of 0.0001 and a minimum identity of 80%. In Silico Analyses of Prophages in Leuconostoc Genomes Ten intact prophages and 24 prophage regions were identified using the PHASTER algorithm, and the genome sizes of the intact prophages ranged from 33.2 kb to 54.2 kb. Total prophage genomes accounted for 0.5 to 6% of the bacterial chromosome, which appeared to be low compared with that of other bacterial genomes (10-20%) [17]. For example, prophage sequences make up 16% of the Escherichia coli O157:H7 genome, and Streptococcus pyogenes carries 12% prophage sequences on its chromosome [32]. The prophage distributions of 17 kimchi-derived Leuconostoc strains with complete genomes reported in the PATRIC database were analyzed using PHASTER. Prophage regions were identified as intact, questionable, and incomplete, according to the algorithm. Among the strains listed in Table 1, eight had 10 intact prophages (one to two prophages per strain), while 13 strains had questionable and/or incomplete prophage regions on the chromosomes. Except for Leu. mesenteroides J18, all strains had at least one prophage region, including questionable and incomplete prophage regions. Compared to kimchi-derived Lactobacillus, Leuconostoc had a relatively low number of prophage regions. Lac. brevis and Lac. plantarum strains contain up to four intact prophages [29,33]. Cases of prophages in cryptic states that had become fixed in the bacterial genomes were observed among the intact prophages. Although they could be excised, these prophages could not form active particles or lyse their hosts because of accumulated mutations [34]. Therefore, using the NCBI database, the essential genes encoding the full functions of phages were identified. Most of the intact prophages had essential genes, such as genes for DNA replication, packaging, morphogenesis, lysis-lysogeny, and regulation/modification modules. Among the 10 intact prophages, four showed frameshift mutations or defects in essential genes and were labeled as putative cryptic phages (Table 1). First, intact prophage 1 of Leu. citreum WiKim 0101 contained pseudogenes for the MCP, terminase large subunit, and tail protein, while endolysin was not detected in intact prophage 2. Second, the tail family protein in Leu. citreum WiKim 0096 had a frameshift mutation. Lastly, in Leu. mesenteroides WiKim 33, the replisome organizer and endolysin were incomplete. Accordingly, the prophages in these strains may not be fully assembled or induced. To further characterize the prophages in kimchi-derived Leuconostoc, the nucleotide sequences were aligned, and a phylogenetic tree based on MCPs was generated (Fig. 1). Eleven prophages, including the induced prophage region, belonged to the Siphoviridae family and were similar to Lc-Nu, sha1, phiMH1, and TPA_asm phages [35][36][37]. Except for TPA_asm, the other three phages belonged to the HK97 family [38]. However, it was difficult to analyze homologies for other morphogenesis and packaging genes because there was no similarity among the phages.
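As a rough illustration of how the completeness categories and genome proportions described above can be tabulated, the following Python sketch bins predicted regions by the percentage of known phage genes and reports the fraction of the chromosome they cover; the region list, field layout, and example values are hypothetical and are not the study's data.

```python
# Minimal sketch: classify predicted prophage regions using the completeness
# thresholds described above (intact >90%, questionable 60-90%, incomplete <60%
# of known phage genes) and report the fraction of the chromosome they occupy.

def completeness_category(known_gene_pct: float) -> str:
    if known_gene_pct > 90:
        return "intact"
    if known_gene_pct >= 60:
        return "questionable"
    return "incomplete"

# Hypothetical PHASTER-style summary: (region name, length in bp, % known phage genes)
regions = [
    ("region_1", 41_200, 95.0),
    ("region_2", 18_400, 72.0),
    ("region_3", 9_800, 35.0),
]
genome_length_bp = 1_900_000  # hypothetical chromosome size, for illustration only

for name, length_bp, pct in regions:
    print(f"{name}: {completeness_category(pct)} ({length_bp / 1000:.1f} kb)")

prophage_fraction = sum(length for _, length, _ in regions) / genome_length_bp
print(f"Prophage regions cover {prophage_fraction:.1%} of the chromosome")
```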
Dairy Leuconostoc lytic phages have been classified as members of the Siphoviridae family; however, some phages in sauerkraut fermentations have been identified as members of family Myoviridae [39,40]. The temperate phages isolated from Leuconostoc spp. in watery kimchi have also been reported as members of Myoviridae [41]. In this study, it is noteworthy that all intact prophages in Leuconostoc belonged to family Siphoviridae. Identification of Phage-Encoding Sie Proteins and MTase To invade the host bacteria and integrate successfully into the genome, phages are required to overcome and adapt to host anti-phage mechanisms, such as restriction-modification (RM) systems, CRISPR-Cas immune system, abortive infection, and toxin-antitoxin systems [42]. Bacteria have been reported to have DNA MTase that transfers a methyl group from S-adenosyl-L-methionine to a target nucleotide to protect the cell from invasion by foreign DNA [43]. Phages from diverse ecosystems integrate cognate MTase-encoding genes that have the advantage of permanently overcoming the host RM hurdle. In addition, Sie proteins on host genome prophages prevent infection and multiplication of other phages by blocking DNA integration, thereby protecting the host from newly incoming phages [44]. In this study, Sie proteins and MTase from intact prophages were predicted using BlastP and TMHMM. Five prophages were predicted to have MTase and Sie proteins ( Table 2). Only one intact prophage found on Leu. lactis CBA3625 had MTase, while the others did not harbor the gene. Meanwhile, the prophages of Leu. citreum CBA3621, Leu. citreum CBA3627, Leu. citreum WiKim 0096, and Leu. lactis CBA3626 encoded for Sie proteins. The presence of Sie proteins in the prophages might confer phage immunity to Leuconostoc strains over other phages, similar to Streptococcus thermophilus [45]. However, Sie and MTase genes on the prophages of Lac. plantarum showed high ratios among the strains by 80% and 50%, respectively [29]. Therefore, prophages that have these proteins may be strain-specific; thus Leuconostoc strains may have different characteristics in terms of evading other phages. Induction and Detection of the Leu. lactis Prophage Among the eight strains with intact prophages, those of Leu. citreum CBA3621, Leu. citreum CBA 3627, and Leu. lactis CBA3626 were induced. However, prophage induction of Leu. citreum CBA3621 and Leu. citreum CBA3627 was not confirmed using PCR or TEM in all MitC concentrations. Therefore, Leu. lactis CBA3626 was selected for prophage induction and was induced further with various chemical stresses (Fig. 2). First, 0.2 μg/ml of MitC was added when the culture reached to 0.2 by OD 600 . After 4 h, the bacterial growth curve was different from that of the negative control, and the supernatant was harvested at 24 h. Morphology was confirmed using TEM, and phage particles were observed. The induced phage morphology exhibited an approximately 60-61 nm icosahedral head and a 132-200-nm-long, non-constrictive tail, similar to the Siphoviridae family ( Fig. 2A). PCR amplification of the MCPs was performed to confirm which phage was induced among prophages. Unexpectedly, the MCP primers were not able to detect the intact prophages; however, two fused, incomplete prophages of Regions 4 (site 1387756-1411885) and 5 (site 1405961-1430068), approximately 42 Kbp, were detected using PHASTER analysis (Fig. 3). 
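To make the growth-curve comparison described above concrete, a minimal sketch is shown below that flags the first time point at which a mitomycin C-treated culture's OD600 falls clearly below the untreated control, which is the crude signature of prophage-induced lysis; the OD readings and the divergence threshold are invented for illustration and are not the study's measurements.

```python
# Compare MitC-treated and control OD600 readings taken every 2 h for 24 h and
# report the first time point at which the treated culture clearly diverges.

time_h = list(range(0, 26, 2))
control = [0.2, 0.5, 0.9, 1.4, 1.8, 2.0, 2.1, 2.1, 2.2, 2.2, 2.2, 2.2, 2.2]
mitc    = [0.2, 0.5, 0.6, 0.6, 0.5, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]

THRESHOLD = 0.7  # treated/control ratio below which growth is called divergent

for t, od_c, od_m in zip(time_h, control, mitc):
    if od_c > 0 and od_m / od_c < THRESHOLD:
        print(f"Growth diverges from the control at ~{t} h (OD {od_m} vs {od_c})")
        break
else:
    print("No divergence detected within 24 h")
```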
Another prophage prediction program of Prophage Hunter was used and suggested that the two fused, incomplete regions were one active assembly [23]. Meanwhile, the tail regions and endolysin proteins were detected using PCR (Fig. 4). Thus, induction of the prophage in Leu. lactis CBA3626, which might have originated from two fused regions of the incomplete prophages, was confirmed using MitC. However, the induced prophage could not confirm the plaque in any Leuconostoc strains, including the host. Induction using other chemical stressors, such as acetic acid, lactic acid, and hydrogen peroxide, was performed using the same method for MitC. Although the growth patterns were similar to those in MitC induction, induction of prophages was not confirmed using spotting assay, PCR, and TEM. This result suggests that the prophage of Leu. lactis CBA3626 could not be induced in the kimchi environment. Lactococcus phages were easily detected when the dairy starter strains were induced [46], whereas Leuconostoc phage was observed at a relatively lower frequency [47]. Thus, these results suggest that Leuconostoc might not be induced well compared to Lactococcus or other starter strains. Comparative Genomics Analysis of Leu. lactis Prophage Among the prophages of Leu. lactis CBA3626, two fused, incomplete prophages were induced using MitC and identified using PCR and TEM. Based on NCBI and BLASTp, genome comparison of the induced prophage with other Leuconostoc phages was performed using the representative phiMH1 [48]. Except TPA_asm, 22 reported phages on NCBI showed very low identity (< 4%) with the induced prophage. Only the TPA_asm phage derived from human metagenome research was similar to the induced phage and showed 47% homology. The structure, lysis, and packing modules were highly similar (> 84%) to the TPA_asm phage, but the genes involved in lysogen showed relatively low identities (Fig. 5). Contrarily, 11 Leuconostoc dairy bacteriophages were confirmed to have high similarity in morphology, replication, and packaging module [49]. Therefore, the induced prophage of Leu. lactis CBA3626 may be different from the reported Leuconostoc phages. The current data on the Leuconostoc phage genome are still lacking compared to Lactobacillus or Lactococcus phage genomes, so further research on Leuconostoc prophages should be conducted, which in turn could significantly affect the quality of the fermented kimchi.
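For readers who want to reproduce a comparison of this kind outside Easyfig, a minimal Python sketch for filtering standard BLAST tabular output (-outfmt 6) with the same cut-offs used above (maximum E-value 0.0001, minimum identity 80%) is given below; the input file name is a placeholder and this is not the pipeline used in the study.

```python
import csv

# Standard BLAST -outfmt 6 columns: qseqid sseqid pident length mismatch
# gapopen qstart qend sstart send evalue bitscore.
MAX_EVALUE = 1e-4    # cut-off used for the Easyfig comparison above
MIN_IDENTITY = 80.0  # minimum percent identity used above

def filtered_hits(path):
    """Yield (query, subject, pident, aln_len, evalue) for hits passing both cut-offs."""
    with open(path, newline="") as handle:
        for row in csv.reader(handle, delimiter="\t"):
            pident, length, evalue = float(row[2]), int(row[3]), float(row[10])
            if evalue <= MAX_EVALUE and pident >= MIN_IDENTITY:
                yield row[0], row[1], pident, length, evalue

if __name__ == "__main__":
    # "prophage_vs_reference.tsv" is a placeholder for a tblastx/blastn tabular file.
    for hit in filtered_hits("prophage_vs_reference.tsv"):
        print(hit)
```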
2021-12-25T06:16:25.729Z
2021-12-23T00:00:00.000
{ "year": 2021, "sha1": "f97283c4619ebeee43ee9cc85cb5f06826b2c703", "oa_license": "CCBY", "oa_url": "https://www.jmb.or.kr/journal/download_pdf.php?doi=10.4014/jmb.2110.10046", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "10dbcea05e1230722c6c5ced5e9d86176153beb0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
222990178
pes2o/s2orc
v3-fos-license
Age-related changes in egg yolk composition between conventional and organic table eggs The aim of this study was to investigate fatty acids, fat-soluble vitamins, malondialdehyde and cholesterol in conventional and organic eggs obtained from hens of different ages (30 and 60 weeks). A total of 360 egg yolks were used in this study. Polyunsaturated fatty acid, omega-3, and omega-6 levels were higher in the organic eggs from the 30-week-old hens. The monounsaturated fatty acid level was higher in the conventional eggs but was the same between the two age groups. Cholesterol and vitamin A levels were not influenced by either the rearing system or the age of the hens. The malondialdehyde, vitamin D2, and vitamin K2 were higher in the organic eggs; however, vitamin E was higher in the conventional eggs. The results showed that the rearing system and age, as well as the diet, had an impact on the composition of the egg. Total levels of polyunsaturated fatty acid, omega-3, and omega-6 are higher in organic eggs produced by younger hens. Introduction Hen eggs are a good-quality and inexpensive food source with a moderate amount of calories (Miranda et al. 2015). Eggs comprise essential amino acids (Sarwar 1997) as well as some fatty acids, such as omega-3 (n-3) and omega-6 (n-6) from the family of polyunsaturated fatty acids (PUFAs), which have been determined to have beneficial effects (Cherian and Quezada 2016). PUFAs have important roles in sustaining physiological conditions, such as protecting cardiovascular and nervous systems. Some chronic diseases, such as cardiovascular, diabetes, cancer, obesity, autoimmune, rheumatoid arthritis, asthma, and depression are also related to the unbalanced intake of n-6 and n-3, especially when the amount of n-6>n-3 (Ristić-Medić et al. 2013). The health benefits of fatty acids have led to widespread research on fatty acid composition in animal products, leading to several attempts to enrich the n-3 content in animal products (Woods and Fearon 2009). These attempts are most common in the poultry industry, especially in egg production (Rymer and Givens 2005). These enriched eggs are also known as "functional eggs". Cholesterol is an amphipathic lipid that is an important component of cell membranes and myelin, and that is necessary for the synthesis of vitamin D; bile salts; and steroids, such as glucocorticoids, estrogen, testosterone, and progesterone (Hu et al. 2010, Orth andBellosta 2012). Chicken eggs are one of the food products that contain a high amount of cholesterol (200-300mg/100g); therefore, eggs have been deemed by health and nutrition experts for several years as a controversial food based on presumptions that they led to cardiovascular diseases. However, some studies have proved that egg consumption has limited effects on blood cholesterol and cardiovascular disease (Eilat-Adar et al. 2013, Li et al. 2013). Eggs also contain 18 vitamins, but their content depends on environmental factors, the hen's diet, laying hen strain, and the hen's age. The 18 vitamins, plus A, D, B2, B12, folate, biotin, pantothenic acid, and choline are those most commonly found in the egg (Miranda et al. 2015). In addition, fat-soluble vitamins E and K found in eggs, and vitamins A and E are known to be antioxidant agents that protect against lipid peroxidation in foods. These vitamins are often added to the hen's feed to both inhibit egg deterioration by prolonging the shelf life of the eggs and enrich the eggs with both vitamins (Mohiti-Asli et al. 
2008, Shahryar et al. 2010, İlhan and Bülbül 2016). In addition, the main source of vitamin K in the hen's diet is green forage material (Bauernfeind and De Ritter 2018). Lipid oxidation is one of the main causes of shorter shelf-life for many foods, especially those containing fat. As a result of this oxidation, the three-carbon dialdehyde malondialdehyde (MDA) is produced, which can damage an organism's proteins and DNA. Although spectrophotometric measurements of MDA are widely used, high-performance liquid chromatography (HPLC) has better sensitivity and specificity for measuring MDA in foods (Papastergiadis et al. 2012). The aims of organic farming are to protect natural resources and animal welfare using appropriate ecological methods, and consumers accept organic foods as good-quality products (Samman et al. 2009, Radu-Rusu et al. 2014). Quality, nutritional value, and safety are the most important factors that affect the food preferences of the consumer; however, there are limited studies comparing organic and conventional foods in terms of food composition. The positive effects of organic feed on an animal's performance, health, and nutritional composition have been determined; however, feeding a strictly organic diet has significant challenges. It is vital that the organic farming sectors meet consumer demands in a sustainable, safe, and affordable manner, while maintaining consumer trust and confidence in the food supply chain (Kristiansen et al. 2006). Organically reared laying hens have been shown to produce more eggs, and several reports have suggested that organic rearing has some effects on the characteristics that affect egg quality. Organically fed laying hens produce heavier eggs with heavier yolks; however, one study found that the weights of conventional and organic eggs were similar (Mugnai et al. 2009). It has been noted that organically fed rabbits have a lower mortality rate (Vogtmann 1998), and that organically fed rats and chickens have a higher immune capacity than those conventionally reared (Lauridsen et al. 2005, Huber et al. 2010). The nutritional composition of animal products is influenced mainly by animal strain, diet, and age (Scheideler et al. 1998). In studies observing age-related changes in the fatty acid composition of eggs, total PUFA, n-6, and n-3 are reportedly high in the eggs of young hens (Liu and Li 1996, Nielsen 1998); however, there are limited data on age-related changes in the levels of vitamins A, D, E, and K and of cholesterol and MDA in these eggs as well as those from conventional and organic rearing systems. The aim of the current study was to investigate the effect of conventional and organic rearing systems on some of the nutrients in egg yolks of Bovans White commercial layer hens. The present paper also examines the effect of hen age on egg composition. Egg sampling and experimental design The study was conducted at a commercial egg production company in Turkey. The research related to animal use complied with all the relevant national regulations and institutional policies for the care and use of animals (permission no: 05.11.2014/206). The study comprised 360 eggs divided into 4 groups (conventional cage and 30-week-old hens, conventional cage and 60-week-old hens, organically reared and 30-week-old hens, organically reared and 60-week-old hens) of 90 eggs each that were obtained from the Bovans White commercial layer hybrid.
The hens were provided feed and drinking water ad libitum, and similar ingredients were used in the diets of the hens in both the conventional cage and organic systems. The feed was produced at the feed mill at the production company in accordance with National Research Council (NRC 1994) standards. The composition of the hens' diet is provided in Table 1. The eggs were randomly collected from the two systems, which included both 30- and 60-week-old hens. This study was planned and conducted in the same rearing season to avoid any seasonal effects on both the system and age group. The conventional cages at the production company comprised four tiers, each 70 cm wide by 55 cm long by 50 cm high. In this system, the flock size was ~40,000 hens and each tier housed 7 hens. Each cage unit had 3 nipple drinkers and a 70-cm-long (10 cm/bird) trough-type feeder. Manure was removed from the house using scraper belts. Artificial lighting was provided in the cages during the laying period. An 8-h light/16-h dark period was implemented for 25 weeks. The eggs were automatically collected using egg belts in this system. Organic egg production at the company is carried out according to specific legislation adopted from European Union directive 99/74/EC. Each organic house was established on an enclosed litter-floor pen of 560 m² (5-6 birds/m²) and an outdoor area of 12,000 m² (4 m² per hen) for 3000 hens. The outdoor area was covered with natural vegetation. Pop-holes were placed along one side of the house to provide the hens with free access to the outdoor area on sunny days. The floor pen had 120 circular hanging feeders and 200 nipple drinkers. The lighting regime was similar to that of the conventional cage system; an 8-h light/16-h dark period was used in the enclosed litter-floor pens. Sample processing for analysis Each egg was broken and each yolk was separated from the albumen. The yolks were rolled on filter paper to remove any albumen residues. From each system and age group, 9 egg yolks from each sample were pooled in 50-ml tubes and gently mixed. Ten samples were prepared for each analysis. Forty tubes were sealed and frozen until analysis. The yolks were extracted and processed according to Hara and Radin (1978). Two aliquots of 5 ml extract were transferred into sealed tubes and stored at -25 °C until analysis of fatty acid composition; cholesterol; vitamins A, D, E, and K; and MDA. Preparation of fatty acid methyl esters Fatty acid methyl esters (FAMEs) were prepared according to Christie (1989) by adding 5 ml of 2% methanolic sulfuric acid solution to 5 ml of extract and vortexing. This mixture was stored at 55 °C for 14-15 h for methylation. Then, 5 ml of 5% NaCl was added to the samples when they reached room temperature. FAMEs were extracted using n-hexane, and the hexane phase was transferred into new tubes to which 2% KHCO3 was added. After 3 h (for completion of phase segregation), the samples were stored at 37 °C for 2 d. At the end of 2 d, 1 ml heptane was added to the samples and they were put into vials for analysis of fatty acid composition with gas chromatography (GC). Gas chromatography After the fatty acids were converted into methyl esters, they were analyzed using the Shimadzu GC-17A (Shimadzu Corporation, Kyoto, Japan). For the analysis, the SP™-28 fused silica capillary column (Sigma-Aldrich Co. LLC, Taufkirchen, Germany) with a film 0.2 μm thick, 0.25 mm in diameter, and 30 m long was used.
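As a quick sanity check of the housing figures given above (cage dimensions, hens per tier, trough length, and the organic pen and outdoor areas), the arithmetic can be written out as below; this only verifies the stated numbers and is not part of the original analysis.

```python
# Conventional cage: 70 cm x 55 cm floor housing 7 hens, with a 70 cm trough feeder.
cage_area_cm2 = 70 * 55
hens_per_cage = 7
print(f"Cage floor area per hen: {cage_area_cm2 / hens_per_cage:.0f} cm2")
print(f"Feeder space per hen: {70 / hens_per_cage:.0f} cm")  # matches the stated 10 cm/bird

# Organic system: 560 m2 indoor pen and 12,000 m2 outdoor area for 3000 hens.
hens_organic = 3000
print(f"Indoor stocking density: {hens_organic / 560:.1f} hens/m2")  # ~5.4, within the stated 5-6 hens/m2
print(f"Outdoor area per hen: {12000 / hens_organic:.0f} m2")        # matches the stated 4 m2 per hen
```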
During the analysis, the column temperature was kept at 120-220 °C, the injection temperature at 240 °C, and the detector temperature at 280 °C. The temperature was increased by 5 °C min -1 until it reached 200 °C and by 4 °C min -1 from 200 °C to 220 °C. Nitrogen was used as carrier gas. A standard FAME mixture was injected to determine the retention time of each fatty acid in the samples. The values of fatty acids are given as percentages of total fatty acids (Tvrzická et al. 2002). HPLC analyses of cholesterol, vitamins A, D, E, K and MDA The 10% methanolic potassium hydroxide solution was used for the extraction of the samples. All samples were equalized with n-hexane for the extraction of non-saponified lipophilic molecules. After this process, 1 ml acetonitrile/methanol was added to the residue and the solution transferred into autosampler vials for analysis. A Shimadzu VP series HPLC instrument (Shimadzu Corporation) was used to conduct the analyses with LC Solution. The detection wavelength for vitamin A was at 326 nm (Sánchez-Machado et al. 2002), and that for cholesterol and vitamins D, E, K was set at 202 nm (Lopez-Cervantes et al. 2006). The values for cholesterol and vitamins D, E, and K are presented as mg g -1 ; that for vitamin A is presented as μg g -1 . HPLC analyses of MDA were conducted according to Karatas et al. (2002). The 1 ml of supernatant from each sample was analyzed using the Shimadzu VP series full-automatic HPLC system (Shimadzu Corporation). The DAD detector and a column of ODS-3 (15 × 4.6 cm, 5 μm) were used for this analysis at a wavelength of 244 nm. MDA standards (Sigma-Aldrich) were prepared to total 2.92 μg ml -1 , and the MDA levels of the samples were calculated as nmol g -1 . Statistical analysis The 2×2 factorial design was used with a General Linear Model and the rearing systems (conventional cage and organic) and age groups (30 and 60 weeks) were the main effects. SPSS 22 (IBM Corp., Armonk NY, USA) was used for the analyses. The data are represented as the mean and standard error of the mean (SEM). p≤0.05 was considered statistically significant (Collins et al. 2009). Results Fatty acid composition: cholesterol and vitamins; A (retinol), D (D2), E (α and δ tocopherols), and K (K2) of the feed samples are presented in Tables 2 and 3, respectively. The saturated fatty acid (SFA) levels in the egg yolks are shown in Table 4. Palmitic acid (C 16:0) and stearic acid (C 18:0) were determined to be high in the yolks with significant differences in these between rearing systems and hen age (p<0.05). Margaric acid (C 17:0) was affected only by age and was higher in the eggs from hens 30 weeks old (0.37%) than in those from hens 60 weeks old (0.15%). Myristic (C 14:0) and behenic acids (C 22:0) were not influenced by either the rearing system and hen age and were not significantly different (p>0.05). The level of monounsaturated fatty acids (MUFAs) in the egg yolks are presented in Table 5, with oleic acid (C 18:1n-9cis) being one of the higher MUFAs found in the samples. Nervonic acid (C 24:1n-9) was influenced by both rearing system and age and was found to be higher in eggs from the conventional system (0.92%) than in those from the organic system (0.47%) but lower in eggs from the younger hens (0.55%) than in those from the older hens (0.83%). This was a significant difference (p<0.001). Palmitoleic acid (C 16:1n-7), oleic acid (C 18:1n-9cis), and elaidic acid (C 18:1n-9trans) levels were similar among the groups (p>0.05). 
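The statistical design described above is a 2×2 factorial General Linear Model (rearing system × hen age) run in SPSS; purely as an illustration, an analogous two-way ANOVA could be fitted in Python as sketched below, with a small synthetic data set standing in for the pooled yolk measurements (the values are invented, not the study's results).

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Synthetic yolk PUFA values (% of total fatty acids) for a 2x2 design:
# rearing system (conventional/organic) x hen age (30/60 weeks), 3 pooled samples per cell.
df = pd.DataFrame({
    "system": ["conventional"] * 6 + ["organic"] * 6,
    "age":    ["30", "30", "30", "60", "60", "60"] * 2,
    "pufa":   [20.1, 19.8, 20.4, 18.9, 19.2, 18.7,
               23.5, 23.9, 23.2, 21.0, 21.4, 20.8],
})

# Two-way ANOVA with interaction, analogous to the General Linear Model used in SPSS.
model = ols("pufa ~ C(system) * C(age)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```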
Discussion The results of this study, similar to those of previous studies, showed that palmitic, oleic, and linoleic acids were the most abundant fatty acids in eggs (Hidalgo et al. 2008, Samman et al. 2009, Stanišić et al. 2015. SFA levels were similar in eggs produced by both organic and conventional production systems, which is in agreement with the results of Cherian et al. (2002). In the present study, the levels of MUFA and PUFA were different according to the rearing system. Higher levels of PUFA and n-3 were produced in eggs from organically reared hens fed grass as a green forage material in the outdoor area (Lopez-Bote et al. 1998, Hammershoj andJohansen 2016). It has been suggested that a high level of PUFA in the hen diet causes a decrease in MUFA content in the egg yolk (Shahid et al. 2015); however, in this study, total PUFA was decreased in eggs from older hens, which suggests that the hen's bodily functions are reduced with age (Barzilai et al. 2012). Absorption in the small intestine decreases with age, which may be the cause of lower levels of PUFA as well as n-3 and n-6 in eggs from older hens (Woudstra and Thomson 2002). Eggs from younger hens have smaller egg yolks than those from older hens; therefore, the proportion of PUFA, n-3, and n-6 is higher in the smaller egg yolks (Nielsen 1998). Cholesterol in eggs can be altered by manipulating the hen's diet. Canola oil (Ismail et al. 2013), flaxseed (Chen et al. 2015), hemp seed (Shahid et al. 2015), and grape seed (Sun et al. 2018) were demonstrated to reduce cholesterol in eggs when added as supplements to the hens' diet. There were no differences in cholesterol content between different housing environments, and hen age had no effect on its content in the eggs (Zemková et al. 2007, Karsten et al. 2010, Anderson 2011. In contrast, Kovacs et al. (1998) have reported that cholesterol concentrations changed periodically during the laying cycle, decreasing from the beginning of the laying cycle to 45 weeks of age and increasing at 51-52 weeks of age, after which it declined to the end of laying period. In addition, the strain of hen has an effect on cholesterol levels in the eggs (Simčič et al. 2009). In this study, the amount of cholesterol was similar among both rearing systems and age groups. Karsten et al. (2010), Kucukyilmaz et al. (2012), and Anderson (2011) have indicated that rearing system has no effect on cholesterol levels in egg yolks, which supports our findings. Zemková et al. (2007) have also reported no age-related changes in cholesterol content in the eggs, which is confirmed in our study. This result could be attributed to the same layer hybrid and similar feed used in the studies. The shelf life of PUFA-rich foods is at a disadvantage because of their susceptibility to oxidative deterioration (Buckiuniene et al. 2016). In this study, MDA levels were higher in the organic eggs than in those from conventionally reared hens, which can be explained by the high proportion of PUFA in the organic eggs (Cengiz et al. 2015). Moreover, organically reared hens are exposed to environmental factors, which may stimulate the formation of cellular free radicals through the steroid synthesis mechanism in the hens . Vitamin E is the common name of a group consisting of four tocopherols and four tocotrienols, creating eight natural compounds (Rizvi et al. 2014). The α-tocopherol in this group has high biological value. 
Studies comparing eggs from organic and conventionally reared hens have shown that the amount of α-tocopherol is lower in the organic eggs (Matt et al. 2009, Mugnai et al. 2009), which is similar to our observation. The low level of α-tocopherol in the organic eggs suggests that it is depleted as it protects the eggs against oxidative deterioration (Shahryar et al. 2010). Moreover, younger hens had lower α-tocopherol levels. The α-tocopherol levels in the eggs might also be influenced by their lipid content (PUFA, notably arachidonic acid, and docosahexaenoic acid) because α-tocopherol is a fat-soluble compound (Lebold and Traber 2014, Takahashi et al. 2017). On the other hand, vitamin E concentrations in the liver have been reported to increase with hen age. The vitamin D group includes secosteroids, such as D1 (ergocalciferol + lumisterol), D2 (ergocalciferol), D3 (cholecalciferol), D4 (22-dihydroergocalciferol), and D5 (sitocalciferol), with D2 and D3 being the most biologically important (Calvo et al. 2005). Feed and sunlight are the two most important sources of vitamin D. The vitamin D2 content in the eggs from the organic system in the current study was higher than that in eggs from the conventional system. This result is believed to be caused by the beneficial effect of daylight and the intake of natural vegetation when the hens accessed the outdoor area. Similarly, this was also the case for the levels of vitamin K2 in the egg yolks. The alfalfa flour intake of the organically raised hens might result in a higher level of vitamin K2 in the organic eggs. Conclusions The results of the present study showed that PUFA, n-3, and n-6 were higher in the organic table eggs; however, the organic eggs were more susceptible to oxidative deterioration because of their high levels of PUFA. Vitamins D2 and K2 were also higher in the organic eggs. We also determined that the levels of PUFA, n-3, and n-6 were higher in eggs from the younger hens. The rearing system and age also had an impact on other egg components. According to the results of the current study, rearing systems can have advantages and disadvantages in terms of egg composition; therefore, all aspects of these systems should be more fully evaluated. When a rearing system is preferred, the number of eggs produced, and not only egg quality, should also be considered. Indeed, egg numbers may be influenced negatively by organic or free-range rearing systems. This fact should be evaluated by considering the supply/demand balance. However, the regulations implemented by governments regarding hen welfare in the rearing systems should also be noted.
2020-10-16T22:23:48.046Z
2020-09-24T00:00:00.000
{ "year": 2020, "sha1": "17d60627f042545d828fed5cc5407878de74b7eb", "oa_license": "CCBY", "oa_url": "https://journal.fi/afs/article/download/91704/56188", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "17d60627f042545d828fed5cc5407878de74b7eb", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
269862945
pes2o/s2orc
v3-fos-license
Disease Control and Treatment Satisfaction in Patients with Chronic Spontaneous Urticaria in Japan Background: Chronic spontaneous urticaria (CSU), characterized by the recurrence of pruritic hives and/or angioedema for >6 weeks with no identifiable trigger, has a negative impact on health-related quality of life (HRQoL). Methods: The objective of this web-based cross-sectional study was to evaluate disease control, disease burden, and treatment satisfaction in Japanese adults with CSU using the Urticaria Control Test (UCT), HRQoL outcomes, and the Treatment Satisfaction Questionnaire for Medication–9 items (TSQM-9). Results: In total, 529 adults were included in the analysis (59.9% female), with a mean ± standard deviation (SD) in CSU duration of 13.2 ± 13.0 years. Based on UCT scores, two-thirds of patients had poor (score of 0–7; 23.6%) or insufficient (score of 8–11; 43.3%) symptom control, and one-third had good control (score of 12–16; 33.1%). Overall treatment satisfaction was not high, with mean ± SD TSQM-9 scores of 55.5 ± 17.6% for effectiveness, 68.2 ± 18.8% for convenience, and 59.2 ± 18.4% for global satisfaction. No apparent differences in TSQM-9 scores were observed between patients receiving different medications. HRQoL outcomes were worse among patients with poor/insufficient symptom control. Conclusions: Japanese adults with CSU have a high disease burden, and better treatment options are needed to increase treatment satisfaction. Introduction Chronic spontaneous urticaria (CSU) is a chronic inflammatory skin disease characterized by the recurrence of wheals (hives), angioedema, or both for >6 weeks with no identifiable trigger [1,2].According to epidemiological studies in Chinese and Korean populations, the point prevalence of urticaria ranges from 0.8% to 4.5% [3,4].A Japanese epidemiological survey by Saito and colleagues reported that approximately two-thirds (66.8%) of patients with urticaria had CSU in 2020 [5].The pathogenesis of CSU is not fully elucidated, and its duration can range from months to years [1]. In 2018, a Japanese real-world study by Itakura and colleagues showed that chronic urticaria (CU; including CSU) was associated with an impaired health-related quality of life (HRQoL) and reduced work productivity, similar to that experienced with psoriasis or atopic dermatitis [6].Many patients with CU also reported low satisfaction regarding their condition and treatment [6].Such findings highlight the unmet needs of affected patients, including the need to improve treatment options in this setting. 
According to the 2018 Japanese guidelines for the management of urticaria, firstline treatment options include oral, non-sedating, or second-generation antihistamines (H1 receptor antagonists), while second-line options include the addition of H2 receptor or leukotriene receptor antagonists, tranexamic acid, diaphenylsulfone, anxiolytics, glycyrrhizin, neutropin (i.e., vaccinia virus-inoculated rabbit inflamed skin extract), or Chinese herbal medicine for those with persistent symptoms [1].The international joint-initiative guidelines from the European Academy of Allergology and Clinical Immunology (EAACI), Global Allergy and Asthma European Network (GA 2 LEN), European Dermatology Forum (EuroGuiDerm), and Asia Pacific Association of Allergy, Asthma, and Clinical Immunology (APAAACI) similarly recommend the use of standard-dosed modern second-generation H1 receptor antagonists in patients with CU, but do not recommend the combined use of H1 and H2 receptor antagonists [2].For patients who do not respond to antihistamines or the abovementioned second-line treatments, third-line options include omalizumab, cyclosporine, and short-term low-dose oral corticosteroids [1]. The anti-immunoglobulin E monoclonal antibody omalizumab was approved for the management of refractory CSU in Japan in March 2017 [7], and its effectiveness and safety have been confirmed in routine clinical practice [8].In a previous study of 90 Japanese adults with CSU by Kaneko and colleagues, global treatment satisfaction with omalizumab was 77.6% (compared with 72.2% with antihistamines) [9].However, this study reported lower patient-perceived convenience with omalizumab versus antihistamines [9], most likely due to the administration route (subcutaneous injection) of the biologic agent. Although previous studies have evaluated the disease burden and treatment satisfaction among adults with CSU, they included a small number of enrolled patients and were conducted in a controlled clinical setting (i.e., specialist dermatology departments) [9].Therefore, there is a need to examine more broadly the relationship between CSU disease control, disease burden, and treatment satisfaction in Japan, particularly as new treatments have emerged, such as biologics, and updated international and Japanese guidelines on urticaria management have been developed and disseminated [1,2].Therefore, we conducted an online questionnaire to evaluate current CSU disease control, disease burden, and treatment satisfaction among Japanese adults with CSU. Study Design This web-based, cross-sectional, observational study of patients with CSU was conducted in Japan from 4-25 April 2022 (UMIN-Clinical Trials Registry: UMIN000047616).Individuals who were registered on the 2021 general consumer panel of Rakuten Insight Inc. (a market research company) were invited to voluntarily participate in the survey.An email with a link to the online questionnaire, which was open for 1 month between April and May 2022, was sent to members of the consumer panel.All responses received within the survey period were accepted, collected as primary data, and anonymized by Rakuten Insight, Inc. The study was conducted in accordance with the principles of the Declaration of Helsinki and the Ethical Guidelines for Medical and Health Research Involving Human Subjects in Japan, issued by the Ministry of Health, Labour, and Welfare, the Ministry of Education, Culture, Sports, Science, and Technology, and the Ministry of Economy, Trade, and Industry. 
Study Participants Study participants were defined based on their survey responses. Adults (aged ≥ 20 years) with CSU who lived in Japan were eligible to participate. Participants who responded that they had been diagnosed with CSU or CU by a physician in the past, had urticaria symptoms for >6 weeks with no previous trigger, and had received treatment for CSU or CU in the 3 months prior to completing this survey were included. Because the diagnosis of CSU is not widespread in Japan, the diagnosis of CSU or CU was included in the selection criteria, and data from patients with CU were collected only if they had experienced symptoms for >6 weeks with no previous trigger. Patients may have also been diagnosed with allergic urticaria (i.e., an allergen/immunoglobulin E [IgE]-mediated urticaria subtype that occurs due to exposure to food, drugs, plants (including natural rubber products), insect toxins, etc., as defined by the Japanese guidelines [1]), acute urticaria, or cold urticaria. Patients who had only received over-the-counter medications and had not recently visited a hospital were also included. Participants provided their informed consent before completing the questionnaire. Those who provided invalid responses were excluded from the analysis. Study Objectives The study objective was to describe treatment satisfaction among adults with CSU using the Treatment Satisfaction Questionnaire for Medication-9 items (TSQM-9). TSQM-9 scores were evaluated according to disease control (based on Urticaria Control Test [UCT] scores), CSU symptoms, and current/previous medications. Another objective was to describe the disease burden in adults with CSU using the following patient-reported outcomes (PROs): UCT, Numerical Rating Scale (NRS) for pruritus, burning, and sleep disturbance, Dermatology Life Quality Index (DLQI), Short Form-8 item (SF-8) health survey, and Work Productivity and Activity Impairment (WPAI) scores.
Online Questionnaire The questionnaire included items regarding the participants' demographics, disease characteristics, treatments, and the following six PROs: (1) TSQM-9 (range 0-100%) [10], which is a validated tool that evaluates treatment satisfaction using nine questions across the effectiveness, convenience, and global satisfaction domains (higher scores indicated higher satisfaction for that domain) (of note, the TSQM-9 omits the treatment-related adverse effects domain of the TSQM due to its potential to influence outcomes in real-world studies); (2) UCT (range 0-16 points) [11], which is a validated PRO that includes four questions to comprehensively evaluate control of urticaria symptoms over a 4-week period (scores of 0-7 = poor symptom control; 8-11 = insufficient symptom control; 12-16 = good symptom control) [6]; (3) NRS for pruritus, burning, and sleep disturbance (range 0-10 points), which rates the average and peak intensity (severity) of symptoms over the last 7 days (scores of 0 = none; 1-3 = mild; 4-6 = moderate; 7-9 = severe; 10 = very severe); (4) DLQI (range 0-30 points) [12], which is a 10-item questionnaire designed to assess the impact of the disease on HRQoL (scores of 0-1 = no effect; 2-5 = small effect; 6-10 = moderate effect; 11-20 = large effect; 21-30 = extremely large effect); (5) SF-8 (range 0-100 points) [13], which is an abbreviated version of the original SF-36 health survey, and measures HRQoL over the past 1 month in two standardized domains (physical health summary and mental health summary; higher scores indicate improved HRQoL (of note, the Japanese national standard mean SF-8 score is 50)); and (6) WPAI questionnaire (range 0-100%) [14], which is a validated instrument used to assess the impact of disease on work and productivity impairment over the last 7 days with regard to absenteeism (working hours lost due to CSU), presenteeism (impaired work due to CSU), overall work productivity loss, and daily activity impairment. Statistical Analysis Based on the total number of individuals in the Rakuten Insight, Inc. general consumer panel for 2021 (approximately 2,200,000), we empirically estimated that 250,000 responses to the screening questions would be collected.We assumed that ≥350 responses would meet the inclusion criteria based on a small-scale feasibility assessment.Considering the limitations of the study's cost including the scale of licenses required, the target number for analysis was 500; a maximum of 550 responses were collected. Study outcomes were assessed using summary statistics (mean, standard deviation [SD], median, interquartile range, and minimum/maximum values).Descriptive statistics were supplemented by additional testing to assist with data interpretation by identifying potentially clinically significant differences.The Mann-Whitney U test was used for pairwise comparisons, the Kruskal-Wallis test was used to test for significant differences between UCT subgroups, the Jonckheere-Terpstra test was used to identify increasing or decreasing trends across the three UCT subgroups, and the Steel-Dwass test was used for multiple comparisons between subgroups; all tests were conducted with a significance level of 5%. 
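The score bands used throughout the results (UCT, DLQI, and the symptom NRS) can be summarised in a small helper such as the one below; the band boundaries are those listed above, while the function names are ours and are not part of any validated scoring software.

```python
def uct_band(score: int) -> str:
    """UCT (0-16): 0-7 poor, 8-11 insufficient, 12-16 good symptom control."""
    if score <= 7:
        return "poor control"
    if score <= 11:
        return "insufficient control"
    return "good control"

def dlqi_band(score: int) -> str:
    """DLQI (0-30): effect of skin disease on HRQoL."""
    bands = [(1, "no effect"), (5, "small effect"), (10, "moderate effect"),
             (20, "large effect"), (30, "extremely large effect")]
    for upper, label in bands:
        if score <= upper:
            return label
    raise ValueError("DLQI score out of range")

def nrs_band(score: int) -> str:
    """NRS (0-10) severity band for pruritus, burning, or sleep disturbance."""
    if score == 0:
        return "none"
    if score <= 3:
        return "mild"
    if score <= 6:
        return "moderate"
    if score <= 9:
        return "severe"
    return "very severe"

print(uct_band(10), dlqi_band(12), nrs_band(7))  # insufficient control, large effect, severe
```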
Study Population Online questionnaire responses were collected from 163,285 individuals in the general population, of whom 605 adults met the study definition of CSU (corresponding to a point-prevalence of CSU of 0.4%).In total, 550 adults with CSU met the inclusion criteria, provided informed consent, and gave complete responses (Figure 1).After excluding 21 individuals due to invalid responses, 529 participants with CSU were included in the final analysis.Patients had a mean ± SD age of 45.3 ± 13.2 years, and 59.9% of the study population were female (Table 1).The mean ± SD duration of CSU was 13.2 ± 13.0 years, and 38.9% of the population (n = 206) had a history of CSU of ≥10 years.Only 16.4% reported a diagnosis of CSU, while most patients (93.8%) reported a diagnosis of CU.Of these patients with CU or CSU, 42.5%, 31.0%, and 18.0% also reported a diagnosis of allergic urticaria, acute urticaria, and cold urticaria, respectively.A previous or current history of angioedema was reported in 64 patients (12.1%).Other common comorbidities for which patients were currently receiving treatment included allergic rhinitis (n = 143; 27.0%), atopic dermatitis (n = 95; 18.0%), and hypertension (n = 84; 15.9%; Supplementary Table S1). 1 Patients could choose more than one response. 2An allergen/IgE-mediated urticaria subtype that occurs due to exposure to food, drugs, plants (including natural rubber products), insect toxins, etc., as defined by the Japanese guidelines [1].* p < 0.001 vs. no angioedema (Mann-Whitney U test); test performed ad hoc for the presence vs. absence of angioedema only in this table.CU = chronic urticaria; CSU = chronic spontaneous urticaria; IgE = immunoglobulin E; SD = standard deviation; UCT = Urticaria Control Tool. Among patients on current antihistamine therapy, patients receiving antihistamines alone had significantly higher UCT scores than those receiving antihistamines in combination with other drugs (mean ± SD 11.2 ± 3.2 vs. 9.2 ± 3.4; p < 0.001; Table 3).In addition, patients currently receiving an increased antihistamine dose had a significantly lower mean ± SD UCT score (7.8 ± 3.7) than those with a previous dose increase (9.7 ± 3.4; p < 0.001) or those with no dose escalation (10.6 ± 3.2; p < 0.001). There was also a significant trend towards lower UCT scores in patients receiving a higher number of current medications (p < 0.001 for decreasing trend).For example, patients receiving six or more current medications (n = 44; 8.3%) had a mean ± SD UCT score of 7.1 ± 3.2, whereas those receiving one or no current medications had mean ± SD UCT scores of 10.6 ± 3.3 and 11.6 ± 3.6, respectively (Table 3). 1 Patients could choose more than one response; 2 The remaining 34.0% of patients not using prescription antihistamines were either untreated or using ≥2 of the drugs listed above.As all OTC medications were classified as either oral or topical OTCs, some patients may have been using OTC antihistamines; 3 Extract of inflamed skin from vaccinia virus-inoculated rabbits.OCS = oral corticosteroids; OTC = over-the-counter; SD = standard deviation; TCS = topical corticosteroids; TSQM-9 = Treatment Satisfaction Questionnaire for Medication-9 items; UCT = Urticaria Control Tool. 
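The point prevalence quoted above follows directly from the screening counts reported in this section; a trivial check is shown below, using only those counts.

```python
respondents = 163_285      # general-population respondents to the screening questions
met_csu_definition = 605   # adults meeting the study definition of CSU
included = 550             # met inclusion criteria and gave complete responses
analysed = 529             # after excluding 21 invalid responses

print(f"Point prevalence of CSU: {met_csu_definition / respondents:.2%}")  # ~0.37%, reported as 0.4%
print(f"Analysed / included: {analysed}/{included}")
```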
In patients currently receiving regular antihistamine therapy, TSQM-9 scores for all three domains were significantly higher in patients on antihistamine monotherapy than in those taking antihistamines in combination with other drugs (p < 0.01 for effectiveness and convenience and p < 0.001 for global satisfaction; Table 3).Use of a higher number of medications was associated with slightly lower treatment satisfaction scores, although the trend was not statistically significant for any of the TSQM-9 domains.Patients receiving six or more medications (n = 44; 8.3%) had mean ± SD TSQM-9 scores of 47.0 ± 16.8% for effectiveness, 57.2 ± 20.0% for convenience, and 50.3 ± 19.7% for global satisfaction, compared with 58.5 ± 18.1%, 71.0 ± 18.3%, and 61.8 ± 18.1%, respectively, among those receiving one medication, and 57.6 ± 22.5%, 69.7 ± 20.0%, and 63.2 ± 21.2%, respectively, among those receiving no medications (Table 3). Mean ± SD WPAI scores across a period of 7 days were low for absenteeism (i.e., fewer lost work hours) in all patients and across UCT subgroups, ranging from 2.2 ± 6.5% in patients with good symptom control (UCT score of 12-16) to 11.8 ± 22.6% in those with poor symptom control (UCT score of 0-7; p < 0.01; Figure 4D).WPAI absenteeism scores were also significantly lower in patients with insufficient control (UCT score of 8-11) than in those with poor control (p < 0.05).For the other three WPAI items (presenteeism, lost work productivity, and activity impairment), mean ± SD scores ranged from 50.1 ± 27.9% to 54.4 ± 26.2% in patients with poor symptom control, 31.2 ± 24.0% to 34.0 ± 25.1% in those with insufficient control, and 13.2 ± 21.4% to 14.7 ± 22.3% in those with good control.Scores for these three WPAI items were signifi-cantly lower in patients with good control than in those with poor control or insufficient control (p < 0.01 for both comparisons), and were significantly lower in patients with insufficient control than in those with poor control (p < 0.01). Overview In this web-based observational study of Japanese patients with CSU, many patients had longstanding disease (mean ± SD disease duration of 13.2 ± 13.0 years), and two-thirds had insufficient or poor symptom control based on UCT scores, a validated, easy-to-use tool that determines disease control in patients with all subforms of CU [11].Treatment satisfaction tended to be lower among those with poorer symptom control (i.e., lower UCT scores) overall and across TSQM-9 effectiveness, convenience, and global satisfaction domains.Patients with poor or insufficient symptom control also reported an increased disease burden, numerically higher pruritus, burning sensation, and sleep disturbance NRS scores, higher DLQI scores, lower SF-8 scores, and higher WPAI scores compared to those with good symptom control. 
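For context, the WPAI percentages discussed above are conventionally derived from self-reported hours and 0-10 impairment ratings. The sketch below uses the commonly published WPAI general-health formulas; these formulas are our addition for illustration and are not stated in this article, and the example inputs are hypothetical.

```python
def wpai_scores(hours_missed: float, hours_worked: float,
                work_impairment_0_10: float, activity_impairment_0_10: float):
    """Return WPAI-style percentages: (absenteeism, presenteeism,
    overall work productivity loss, activity impairment).

    Follows the commonly published WPAI general-health scoring; included
    here for illustration only, not taken from this study."""
    total = hours_missed + hours_worked
    absenteeism = hours_missed / total if total else 0.0
    presenteeism = work_impairment_0_10 / 10
    overall = absenteeism + (1 - absenteeism) * presenteeism
    activity = activity_impairment_0_10 / 10
    return tuple(round(x * 100, 1) for x in (absenteeism, presenteeism, overall, activity))

# Hypothetical respondent: 4 of 40 scheduled working hours missed, moderate impairment ratings.
print(wpai_scores(4, 36, 5, 4))  # (10.0, 50.0, 55.0, 40.0)
```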
Epidemiology In this study, confirmed diagnoses of CSU and CU were reported by 16.4% and 93.8% of patients, respectively.However, it should be noted that the disease term 'CU' in Japan is regarded as being almost identical to the international 'CSU' disease term (i.e., Japanese guidelines do not use the term 'CU') because the guidelines classify acute urticaria and CU after diagnosing spontaneous urticaria [1].As such, it is important to note that in this study, we considered a diagnosis of CU as CSU if patients had indicated typical characteristics of CSU (i.e., experienced urticaria symptoms for >6 weeks with no previous identifiable triggers).In addition, 42.5% of patients in this study reported a diagnosis of allergic urticaria.The study by Saito and colleagues reported that 66.8% of 1061 patients with urticaria were classified as having CSU in 2020, and only 0.8% as having allergic urticaria (i.e., mediated by type I hypersensitivity) [5], the latter of which was consistent with the allergic urticaria prevalence reported by a 2020 national patient survey from the Ministry of Health, Labor, and Welfare [15].This indicates that, in the current study, over 40% of patients and/or their physicians incorrectly identified the cause of their CSU or CU as being allergic in nature (i.e., allergen/IgE-mediated), despite having symptoms consistent with the definition of CSU (i.e., having no identifiable trigger).Together, these findings highlight the need to further educate both physicians and patients to raise disease awareness, facilitate easier CSU diagnosis, and increase the recognition of the international CSU disease term in Japan.In the general population of Japanese residents aged ≥20 years who responded to this online survey, the estimated point prevalence of CSU was 0.4%, which is similar to the previously reported urticaria point prevalence of 0.8% in China [3], with approximately two-thirds of patients with urticaria having CSU, as reported in the previous Japanese study by Saito and colleagues [5].In this study population, patients with CSU had a mean age of 45.3 years, and 59.9% were female, which is similar to the demographics of previous real-world studies from Japan [5,6] and Western countries [16]. The proportion of patients with a previous or current history of angioedema in our current study (12.1%) was lower than that previously reported in the ASSURE-CSU study in Western populations (40.3%) [17].In contrast, the randomized, double-blind, placebocontrolled phase III POLARIS trial of omalizumab treatment in an East Asian population with CSU reported angioedema in 16.4%-20.3% of its treatment groups [18].Similarly, the 2020 Japanese epidemiology study reported angioedema in 14.1% of patients with urticaria [5].Therefore, our finding regarding the low prevalence of angioedema is in line with previous publications, indicating that angioedema is less common in Asian versus Western populations with CSU.In the present study, patients with angioedema had significantly worse symptom control than those with no history of angioedema.This is in line with results from ASSURE-CSU, where disease severity and activity increased as the incidence of angioedema increased [17]. 
Treatment Satisfaction Since the 2018 study by Itakura and colleagues [6], there have been no reports of a relationship between urticaria control status and treatment satisfaction; however, the treatment of urticaria in Japan has advanced in more recent years.Treatment satisfaction for all patients in the current study (mean TSQM-9 scores of 55.5% for effectiveness, 68.2% for convenience, and 59.2% for global satisfaction) was notably lower than reported in the previous survey of 90 Japanese patients with CSU by Kaneko and colleagues (TSQM-9 scores of 68.6%, 72.0%, and 72.2%, respectively) [9].This discrepancy may be due to differences in therapeutic and explanatory approaches, as patients in the previous study were seen at specialist dermatology departments [9], while those in the current study were treated in various dermatology and non-dermatology clinics and hospital departments. In the current study, patients who were prescribed oral antihistamines had higher treatment satisfaction than those receiving other medications.Overall, inadequate symptom control is generally linked to lower treatment satisfaction in patients with CU [19].Patients who were only prescribed oral antihistamines may have milder and better controlled symptoms.Although the Japanese urticaria management guidelines recommend secondgeneration antihistamines (H1 receptor antagonists) as a first-line treatment [1], 13.2% of patients in our study reported never receiving prescribed oral antihistamines, potentially because they had only taken over-the-counter medications without consulting a physician or had only been prescribed topical medications. Despite the high frequency of prescribed topical corticosteroid use in the current study (35.0% of patients), this treatment was associated with lower UCT scores and treatment satisfaction than antihistamines.According to the Japanese and international guidelines for urticaria management, recommended treatment options do not include topical corticosteroids [1,2]; this is due to a lack of evidence of their efficacy in treating CSU, which may explain the lower UCT scores and low treatment satisfaction among patients using topical corticosteroids in our study.In contrast to our results, a previous Japanese survey of 90 patients with CSU seen at specialist dermatology departments showed that the prescription of topical corticosteroids is broadly in line with guideline recommendations, with only 3/90 patients (3%) using topical corticosteroids [9].While it is not clear why there is such a difference in the prescribing rate of topical steroids (35% vs. 3%), our results indicate a need to increase awareness among physicians of the standard guideline-directed therapies, as well as a need for more efficacious treatment options, which should lead to improved treatment satisfaction. In the current study, patients had numerically lower treatment satisfaction scores than those reported previously by Kaneko and colleagues [9], and patients receiving omalizumab had similar treatment satisfaction to those receiving other drugs.The lower treatment satisfaction observed in our study may be because the Kaneko et al. 
study was conducted in specialist dermatology departments, whereas our study evaluated treatment satisfaction in a real-world setting that included patients who did not receive expert medical care. However, particularly as the number of patients receiving omalizumab in the current study was low (n = 14; 2.6%), these findings and comparisons between other studies should be interpreted with caution.

Disease Burden
Patients with CSU are known to have a high disease burden [6,9]. In the current study, pruritus average NRS scores were numerically lower than peak NRS scores. This may be a characteristic of urticaria, in which the transient erythema, wheals, and pruritus symptoms often appear and fade repeatedly [5]. Pruritus, burning sensation, and sleep disturbance NRS scores were significantly higher among patients with lower UCT scores (i.e., poor symptom control). Of note, burning sensation is not typically assessed in patients with CSU, although this symptom may become more widely evaluated and reported in patients with CSU.

In the current study, patients had low overall absenteeism scores, indicating that few patients missed work due to urticaria symptoms. In a previous web survey of Japanese patients with CU (n = 409), the mean ± SD absenteeism score was 2.4 ± 9.0 overall, 2.9 ± 9.0 in the UCT 0-7 subgroup, 1.7 ± 6.9 in the UCT 8-11 subgroup, and 2.8 ± 10.7 in the UCT 12-16 subgroup, with no trend observed according to the UCT score [6]. In contrast, the current study showed a higher overall absenteeism score (6.1 ± 15.9) than that reported in the previous web survey [6], and found significantly higher absenteeism scores in patients with poor (UCT score of 0-7) versus insufficient (score of 8-11) or good (score of 12-16) symptom control. One possible explanation for this divergence between study results may be a difference in study populations. While the previous survey included patients with CU (defined as the presence of chronic symptoms (wheals, itching, and angioedema) persisting for >6 weeks) [6], the current study evaluated patients with CSU (defined as physician-diagnosed CSU or CU, with urticaria symptoms for >6 weeks with no previous trigger). Another reason may be a change in the CSU disease terminology over time. As the disease term 'CSU' has become more widely used and patients have become more accurately diagnosed in the past few years, patients with CSU (i.e., CU with no previous trigger) may have become more aware of their condition.

Limitations
The limitations of this study included those inherent to web-based questionnaires, such as that all study outcomes were PROs, and the possible influence of selection bias (participants could choose whether they responded to the questionnaire) and recall bias. However, while these limitations exist, this method of investigation has been employed in several studies previously [19,21,22], as studies of this type can provide insight into the real-world management and treatment of a disease. Additionally, given the small number of participants receiving omalizumab in the current study, findings regarding treatment satisfaction with this medication should be interpreted with caution.
Conclusions
This web-based questionnaire of patients with CSU in Japan found that disease burden was high, whereas treatment satisfaction was not high and showed no major differences when assessed according to current medications. To improve treatment satisfaction and reduce disease burden in patients with CSU, improved disease control is needed, as higher UCT scores correlate with higher treatment satisfaction and lower disease burden. Good disease control can be achieved with accurate diagnosis and appropriate therapy; thus, physicians should follow guideline-based treatment strategies. The current study found that topical steroids are frequently used, despite the lack of recommendations from either the Japanese or international guidelines, indicating that standardized treatment, as well as better treatment options, are needed for patients with CSU in Japan.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm13102967/s1, Table S1: Currently treated comorbidities in the study cohort; Figure S1: Summary of scores in all patients and in subgroups based on UCT for DLQI sub-item scores.

Author Contributions: Conceptualization: A.F., Y.K., K.A. and H.F. Formal analysis: A.F., Y.K., K.A. and H.F. Investigation: Y.K., K.A. and H.F. Writing-review and editing: A.F., Y.K., K.A. and H.F. Y.K. has full access to all the study data, and agrees to take responsibility for the integrity and accuracy of the data, and the decision to publish. All authors reviewed and agreed on all versions of the manuscript before submission, during revision, the final version accepted for publication, and any significant changes introduced at the proofing stage. All authors have read and agreed to the published version of the manuscript.

Funding: This study and editorial assistance in the preparation of this article was supported by Sanofi K.K., Japan.

Institutional Review Board Statement: The study protocol was approved by the Ethics Review Board of Medical Corporation Tokei-kai Kitamachi Clinic on 16 March 2022 (study number: OBS17568) in accordance with local regulations, including data protection laws.

Informed Consent Statement: Participants provided their informed consent before completing the questionnaire.

Figure 1. Disposition of study population. * Includes one patient with a response time of ≥24 h.

Figure 2. Tukey box plot of UCT scores indicating the status of urticaria and angioedema over the past 4 weeks. The box represents the lower quartile (Q1), median, and upper quartile (Q3) values, the cross represents the mean value, the error bars represent the minimum/maximum values (excluding outliers), and the dots represent the outliers. SD = standard deviation; UCT = Urticaria Control Test.

Table 1. Patient demographics, baseline characteristics, and UCT scores for each category.

Table 3. UCT and TSQM-9 scores according to patient treatment status, antihistamine dose status, healthcare institute, and number of medications (N = 529).
2024-05-19T15:12:54.458Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "bb37e6d1deff01b7f16dc3d7708a69eb0148a290", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/13/10/2967/pdf?version=1715951661", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9f8610d0bf8c4b416123fc5feb556d7d5e98726f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
231864938
pes2o/s2orc
v3-fos-license
Dietary iso-α-acids prevent acetaldehyde-induced liver injury through Nrf2-mediated gene expression Acetaldehyde is the major toxic metabolite of alcohol (ethanol) and enhances fibrosis of the liver through hepatic stellate cells. Additionally, alcohol administration causes the accumulation of reactive oxygen species (ROS), which induce hepatocyte injury-mediated lipid peroxidation. Iso-α-acids, called isohumulones, are bitter acids in beer. The purpose of this study was to investigate the protective effects of iso-α-acids against alcoholic liver injury in hepatocytes in mice. C57BL/6N mice were fed diets containing isomerized hop extract, which mainly consists of iso-α-acids. After 7 days of feeding, acetaldehyde was administered by a single intraperitoneal injection. The acetaldehyde-induced increases in serum aspartate aminotransferase (AST) and alanine aminotransferase (ALT) levels were suppressed by iso-α-acids intake. Hepatic gene expression analyses showed the upregulation of detoxifying enzyme genes, glutathione-S-transferase (GST) and aldehyde dehydrogenase (ALDH). In vitro, iso-α-acids upregulated the enzymatic activities of GST and ALDH and induced the nuclear translocation of nuclear factor-erythroid-2-related factor 2 (Nfe2l2; Nrf2), a master regulator of antioxidant and detoxifying systems. These results suggest that iso-α-acid intake prevents acetaldehyde-induced liver injury by reducing oxidative stress via Nrf2-mediated gene expression.

Introduction
Excessive consumption of alcoholic beverages is a leading cause of liver disease, including cirrhosis, liver cancer, and acute and chronic liver failure, worldwide [1]. The metabolism of ethanol generates reactive oxygen species (ROS), which play a role in the deterioration of alcoholic liver disease (ALD) [2]. There was a positive correlation between individual alcohol consumption and the ratio of ALD patients to all liver disease patients, and both are increasing in Asia [3]. Globally, in 2010, alcohol-attributable liver cirrhosis was responsible for 493,300 deaths, of which 225,900 deaths occurred in Asia [3]. hexane layer and prepared the extract, which included iso-α-acids in high density (isohumulones content of more than 70% by HPLC analysis), by distilling off the hexane in an evaporator, and we used the extract for experiments [14].

Animal experiment
Eight-week-old male C57BL/6NCrSlc mice were purchased from Japan SLC (Shizuoka, Japan) and were housed under controlled temperature (24 ± 1˚C), humidity (50-70%), and light (12 h light-dark cycle) conditions. The protocols for the animal experiments were approved by the Animal Use Committee of Sapporo Holdings Ltd., Research and Development Division (permission number: 2018-004). The eight-week-old male C57BL/6NCrSlc mice were fed the AIN-93G diet (Oriental Yeast, Tokyo, Japan) for a week and then were divided into two groups (n = 15 per group) based on similar average body weight. The control diet group was fed the AIN-93G diet, and the iso-α-acid diet group was fed the AIN-93G diet containing 0.5% (w/w) iso-α-acids. Mice in the control group were pair-fed the control diet in the amount of the ad libitum intake of the iso-α-acids group for a week. After the one-week feeding period, animals were treated with an intraperitoneal (i.p.) injection of acetaldehyde (200 mg/kg) [19].
For serum AST and ALT assays and liver tissue collection, tail vein blood samples were collected at 0, 1, 3, and 5 h after the acetaldehyde injection (200 mg/kg, i.p.), and liver samples were collected after blood sampling (5 h time point) under deep anesthesia (n = 10). For blood acetaldehyde concentration measurement, orbital sinus blood samples were collected at 5, 10, and 15 min after the acetaldehyde injections (n = 5).

Serum enzyme assays
Serum was prepared from tail vein blood samples. Serum AST and ALT activity was measured using a transaminase CII test kit (Wako, Osaka, Japan).

Determination of Aldehyde Dehydrogenase (ALDH) activity
The ALDH activity of mouse liver tissues and murine hepatoma cells was determined by measuring the 6-methoxy-2-naphthoic acid produced from 6-methoxy-2-naphthaldehyde through oxidation of its aldehyde group, using HPLC with fluorescence detection [20]. We used the Agilent 1100 HPLC system (Agilent Technologies Japan, Tokyo, Japan) equipped with a Symmetry C18 HPLC column (2.1 × 50 mm, 3.5 μm; Nihon Waters, Tokyo, Japan). The separation of compounds was carried out by gradient elution. Solvent A was 1.0% formic acid, and solvent B was acetonitrile containing 1.0% formic acid. The gradient program was as follows: 0 min, 36% B; 0-4.5 min, linear gradient to 54% B. The flow rate was 0.5 mL/min, the column temperature was 40˚C, and the naphthoic acid product was detected by fluorescence (Ex. 310 nm/Em. 360 nm).

Determination of Glutathione S-Transferase (GST) activity
The GST activity of mouse liver tissues and Hepa1c1c7 cells was determined by the method of Habig et al. [21]. The assay mixture (180 μL) contained 100 mM potassium phosphate (pH 6.5), 1.0 mM GSH and 10 μL of liver or cell fraction. The assay was started by the addition of 20 μL of 10 mM 1-chloro-2,4-dinitrobenzene, bringing the total volume to 200 μL. The initial velocity of 2,4-dinitrophenyl-S-glutathione generation was determined by monitoring the absorbance at 340 nm (an illustrative calculation is sketched below).

Determination of blood acetaldehyde levels
Collected orbital sinus blood samples were immediately added to 500 μL of 0.001% (w/w) t-butanol (internal standard) in 0.6 N perchloric acid, vortexed and then centrifuged at 1,000×g for 3 min. Finally, 450 μL of the supernatant was transferred to 20 mL gas chromatography vials and used to determine blood acetaldehyde levels by gas chromatography (GC). A GC equipped with a flame ionization detector (GC-2014, Shimadzu, Japan) combined with a headspace autosampler (TurboMatrix 40, PerkinElmer) was used throughout the study. The chromatographic conditions, in short, were as follows: the column, injector and detector temperatures were 90, 110, and 200˚C, respectively. The separation column was a Supelcowax wide-bore capillary (60 m length, 0.53 mm i.d., 2 μm film thickness; Supelco, PA, USA). Nitrogen was used as the carrier gas at 50 kPa [19].
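To make the GST assay above concrete, the following is a minimal sketch of how an absorbance trace at 340 nm could be converted into specific activity. The extinction coefficient of 9.6 mM⁻¹ cm⁻¹ for the GS-DNB conjugate, the 0.2 mL assay volume and the illustrative protein and absorbance values are assumptions for illustration and are not taken from the study.

```python
import numpy as np

def gst_specific_activity(times_s, a340, protein_mg, assay_vol_ml=0.2, path_cm=1.0):
    """Estimate GST specific activity (nmol conjugate formed / min / mg protein).

    times_s    : time points of the absorbance readings (seconds)
    a340       : absorbance at 340 nm at each time point
    protein_mg : amount of liver/cell protein added to the cuvette (mg)
    """
    EPSILON_MM = 9.6                                          # assumed mM^-1 cm^-1 for the GS-DNB conjugate
    slope_per_min = np.polyfit(times_s, a340, 1)[0] * 60.0    # initial velocity, dA340/min
    conc_mM_per_min = slope_per_min / (EPSILON_MM * path_cm)  # mM of product formed per min
    nmol_per_min = conc_mM_per_min * assay_vol_ml * 1000.0    # nmol per min in the 0.2 mL assay
    return nmol_per_min / protein_mg

# Hypothetical linear phase of the reaction (first 60 s)
t = np.arange(0, 61, 10)                        # s
a = 0.05 + 0.004 * t                            # A340 rising by 0.004 per second
print(round(gst_specific_activity(t, a, protein_mg=0.05), 1))  # ~100 nmol/min/mg
```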
For each sample, biotinylated single-stranded cDNA was synthesized from 100 ng of total RNA using a GeneChip WT PLUS reagent kit (Thermo Fisher Scientific). The cDNAs were subsequently hybridized to a Clariom S Mouse Array (Thermo Fisher Scientific). The arrays were washed and labeled with streptavidin-phycoerythrin using a GeneChip Hybridization, Wash and Stain Kit and the Fluidics Station 450 system (Thermo Fisher Scientific). Fluorescence was detected using a GeneChip Scanner 3000 7G (Thermo Fisher Scientific). Affymetrix GeneChip Command Console software was used to reduce the array images to the intensity of each probe (CEL files). The CEL files were quantified using the Factor Analysis for Robust Microarray Summarization algorithm (quantile normalization, qFARMS) [23] with the statistical packages R [24] and Bioconductor [25]. Probe sets found to be differentially expressed between the iso-α-acids and control diet groups were identified according to the rank products method [26] using R. All microarray data were deposited in the National Center for Biotechnology Information Gene Expression Omnibus (http://www.ncbi.nlm.nih.gov/geo/ ; GEO Series accession number GSE140387). Probe sets with a false-discovery rate (FDR) < 0.05 were considered to reflect the intake of the iso-α-acids. Gene-annotation enrichment analysis was then performed using the web tool Database for Annotation, Visualization, and Integrated Discovery (DAVID; http://david.abcc. ncifcrf.gov/) with Gene Ontology (GO). Benjamini-Hochberg FDR corrections were used to correct the results. GO terms with FDR-corrected p-values of < 0.01 were regarded as significantly enriched. Subsequently, Ingenuity Pathway Analysis (IPA, Qiagen) was used to identify activated/inactivated canonical pathways and upstream regulators by iso-α-acid intake. IPA calculated p-values using Fisher's exact test. In addition, IPA also calculated activation zscores. Canonical pathways and upstream regulators with a p-value < 0.05 were regarded as statistically significant. The pathways and upstream regulators with z-scores > 2 were regarded as significantly activated, and those with z-scores < -2 were regarded as significantly inactivated. Quantitative RT-PCR analysis Total RNA isolated from liver samples was used. The cDNA synthesis of total RNA was carried out using a ReverTra Ace qPCR RT kit (TOYOBO, Osaka, Japan) according to the manufacturer's instructions. To measure the mRNA amount in liver samples, RT-PCR was conducted with SYBR green dye using the LightCycler 480 SYBR system (Roche Applied Science, Mannheim, Germany). The oligonucleotide primers used for amplification were obtained from commercial products (Adh1, Aldh1, Aldh2, Gsta2, Gsta4, Gstm1, Gstt2, Sod1 and Actb (βactin) (Takara Bio, Shiga, Japan)). The expression of each gene was normalized to that of βactin mRNA. Cell culture and treatment Hepa1c1c7 (ECACC 95090613) murine hepatoma cells were cultured with alpha modified Eagle minimum essential medium supplemented with 10% fetal bovine serum, 100 units/mL penicillin, and 100 μg/mL streptomycin in an atmosphere of 5% CO 2 at 37˚C. Cells were seeded on 6-well plates at a density of 2.5 × 10 5 cells/well for each experiment and allowed to grow for 24 h. Cells were then treated with either vehicle (acetonitrile, 0.1%) or various concentrations (5, 25 or 100 ppm) of iso-α-acids for 48 h. The experimental conditions were in accord with a previous report [27]. 
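Returning to the qRT-PCR analysis above: the text states only that each gene was normalized to β-actin, without specifying the quantification model. The snippet below is a minimal sketch of one common choice, the comparative 2^(−ΔΔCt) method, using entirely hypothetical Ct values; the function name and numbers are illustrative and are not taken from the study.

```python
import numpy as np

def relative_expression(ct_target, ct_actb, calibrator_dct):
    """Relative mRNA level by the comparative (2^-ddCt) method.
    ct_target, ct_actb : Ct values for the gene of interest and beta-actin
    calibrator_dct     : mean dCt (target - beta-actin) of the control-diet group"""
    dct = np.asarray(ct_target) - np.asarray(ct_actb)   # normalise to beta-actin
    ddct = dct - calibrator_dct                          # compare with the control group
    return 2.0 ** (-ddct)                                # fold change vs. control

# Hypothetical Ct values for Aldh2 in control- and iso-alpha-acid-fed mice
ctrl_target, ctrl_actb = np.array([24.1, 24.3, 24.0]), np.array([17.0, 17.2, 16.9])
iso_target,  iso_actb  = np.array([23.0, 22.8, 23.1]), np.array([17.1, 16.9, 17.0])

calib = np.mean(ctrl_target - ctrl_actb)
print(relative_expression(iso_target, iso_actb, calib))  # roughly 2-fold up-regulation
```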
Western blotting
To examine the nuclear translocation of the transcription factor Nrf2, nuclear proteins were isolated from treated Hepa1c1c7 cells by using a LysoPure Nuclear and Cytoplasmic Extractor Kit (Wako). Nuclear proteins were separated by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). Proteins were transferred onto polyvinylidene difluoride membranes by using the Trans-Blot Turbo Transfer System (Bio-Rad, Hercules, CA, USA), followed by blocking of nonspecific binding with 5% skim milk in Tris-buffered saline. Membranes were incubated with antibodies against Nrf2 (1:1,000; Cell Signaling Technology Japan, Tokyo, Japan) or β-actin (1:10,000; Cell Signaling Technology Japan) at 4˚C overnight. After washing with Tris-buffered saline containing 0.05% Tween 20, membranes were incubated with a peroxidase-labeled secondary antibody (1:5,000, anti-rabbit) for 1 h, and then immune complexes were visualized using ECL Prime Western Blotting Detection Reagent (GE Healthcare Japan, Tokyo, Japan) and analyzed by the ChemiDoc XRS system (Bio-Rad).

Statistical analysis
All values are expressed as the mean ± standard error of the mean (S.E.M.). Statistical analysis was performed using Student's t-test and Dunnett's test where appropriate, using JMP 13 (SAS Institute Japan, Tokyo, Japan). Differences were considered significant at p < 0.05.

The preventive effects of iso-α-acid intake on acetaldehyde-induced liver injury
To evaluate the effects of iso-α-acids administration on acetaldehyde-induced liver injury, mice were fed iso-α-acids or a control diet for one week before administration of a single i.p. injection of acetaldehyde (200 mg/kg). Their body weights and liver weights were not different between the two groups (S1 Table). We measured the acetaldehyde i.p. injection-induced changes in serum AST and ALT levels of the mice. The administration of acetaldehyde led to increased serum AST and ALT levels. The elevations in both AST and ALT levels were significantly lower in the iso-α-acid diet group than in the control diet group (Fig 1). This result suggested that the intake of iso-α-acids provided preventive effects in mouse liver against acetaldehyde-induced injury.

Intake of iso-α-acids upregulated genes related to detoxification, antioxidation and ethanol degradation
To characterize the mechanism underlying the preventive effects of iso-α-acids on acetaldehyde-induced liver disease, the global gene expression in mouse liver was analyzed by a DNA microarray technique. Principal component analysis of DNA microarray data revealed that mice fed the iso-α-acids diet and control diet formed clusters that were distinct from each other (S1 Fig). This result showed that the iso-α-acids diet significantly influenced the gene expression profile in mouse liver. We extracted differentially expressed genes (DEGs) statistically using the rank products method and identified 851 upregulated probe sets and 714 downregulated probe sets in the iso-α-acid diet group compared to the control diet group, with an FDR of < 0.05. DEGs were classified into functional categories according to Gene Ontology (GO) biological process terms using a modified Fisher's exact test (FDR < 0.01). The significantly enriched GO terms found in the upregulated and downregulated genes were applied to QuickGO to map the terms hierarchically (S2 and S3 Figs). To select DEGs related to liver injury, we focused on the upregulated genes including GO terms related to "glutathione metabolic process (GO:0006749)" (Table 1).
Table 1. Upregulated genes related to the glutathione metabolic process that are altered in the liver following the consumption of an iso-α-acids diet, identified by DNA microarray analysis.

The expression of genes in the GST family was upregulated by the intake of iso-α-acids. GST family members are Phase II drug-metabolizing enzymes and have the function of detoxification and antioxidation. Next, DEGs were imported into the IPA software to identify biological networks and pathways (Table 2). This pathway analysis also showed expression changes in genes in detoxification and antioxidation pathways such as the "Nrf2-Mediated Oxidative Stress Response", "Glutathione-Mediated Detoxification" and "Glutathione Redox Reactions" pathways. These pathways were predicted to be activated. Furthermore, the analysis predicted the activation of ethanol degradation pathways. DNA microarray analysis and IPA revealed that the expression of genes related to detoxification, antioxidation and ethanol degradation was upregulated in the liver by iso-α-acid intake. Therefore, we analyzed the expression of these genes in the liver by quantitative RT-PCR analysis (Fig 2). The expression levels of Gsta4, Gstm1, Gstt2, Adh1, Aldh1 and Aldh2 were significantly increased in the iso-α-acid diet group compared to the control diet group. However, the expression levels of Gsta2 and Sod1 were not increased. These results indicated that the intake of iso-α-acids upregulated the expression levels of genes related to detoxification, antioxidation and ethanol-acetaldehyde degradation in mouse liver.

Intake of iso-α-acids accelerated acetaldehyde metabolism
To evaluate the effects of iso-α-acids intake on detoxification, antioxidation and acetaldehyde metabolism, liver tissue samples were collected 5 hours after acetaldehyde treatment and prepared. GST and ALDH activities were significantly increased in the iso-α-acids diet group (Fig 3A and 3B). These results indicated that the intake of iso-α-acids provided strong antioxidative, detoxification and acetaldehyde degradation effects in mouse liver at the enzymatic activity level. Next, the level of TBARS, a biomarker of oxidative stress, in the liver was measured. The level of TBARS was reduced by iso-α-acid ingestion (Fig 3C). The time course of blood acetaldehyde concentrations was measured in the control diet and the iso-α-acids diet mice. The iso-α-acid diet mice exhibited accelerated acetaldehyde metabolism compared to the control diet mice (Fig 3D and 3E). These results strongly suggested that the enhanced enzymatic activities in the liver facilitated a reduction in reactive oxygen species and the degradation of acetaldehyde in mice fed the iso-α-acids diet.

Nrf2 accumulation in iso-α-acid-treated murine hepatocyte cells
Finally, we focused on the transcription factor Nrf2/Nfe2l2 as an upstream regulator of the expression of genes related to detoxification, antioxidation and ethanol-acetaldehyde degradation in the livers of mice fed the iso-α-acids diet. IPA canonical pathway analysis of DEGs in the DNA microarray analysis predicted the activation of the "Nrf2-Mediated Oxidative Stress Response" pathway as mentioned above. IPA upstream analysis predicted the activation of Nrf2 as an upstream regulator of the DEGs (activation Z-score: 5.506).
These results strongly suggested the contribution of Nrf2 to the hepatic gene expression changes induced by iso-α-acids intake. The accumulation of Nrf2 in the nucleus leads to the expression of antioxidant genes. However, the gene expression levels of Nrf2 and its specific repressor Kelch-like ECH-associated protein 1 (Keap1) were not changed in the liver by iso-α-acids intake in the DNA microarray analysis. Therefore, we examined the nuclear translocation of Nrf2 protein by using a murine hepatocyte cell line with iso-α-acids administration. First, we measured the GST and ALDH activities of the iso-α-acid-treated Hepa1c1c7 cell lysates (Fig 4A and 4B). Overall GST and ALDH activities were enhanced by iso-α-acids treatment in vitro as well as in the liver. In vitro analysis revealed that their activities were dose dependent. Even 5 ppm of iso-α-acids administration significantly enhanced both GST and ALDH activities. Next, Western blot analysis with the nuclear fraction of the iso-α-acid-treated cells was performed. The results showed that iso-α-acids induced Nrf2 accumulation in the nuclear fraction of Hepa1c1c7 cells in a dose-dependent manner (Fig 5C and 5D). These results indicated that iso-α-acids upregulated the GST and ALDH activities and facilitated the nuclear translocation of Nrf2 in hepatocytes.

Discussion
In this study, we found that iso-α-acids intake suppressed the increase in serum AST and ALT induced by acetaldehyde. Furthermore, we revealed that iso-α-acids increased GST and ALDH activities through the Nrf2-mediated upregulation of gene expression. As a result, we demonstrated that iso-α-acid intake contributed to the hepatoprotective effect. A previous study showed that iso-α-acids-rich extracts attenuated alcohol-induced hepatic steatosis but did not affect hepatic markers, such as AST and ALT [28]. However, we showed that AST and ALT levels were significantly decreased in the iso-α-acids diet group compared with the control diet group (Fig 1). We speculated that the differences were caused by the exposure to ethanol rather than acetaldehyde. The authors in the previous study administered one oral bolus of ethanol, while we administered acetaldehyde i.p. to induce liver injury in mice. Because the ALDH activity in mouse liver is very high, only small traces of acetaldehyde were seen in mouse blood after ethanol injection [19]. Therefore, liver injury in mice might have been insufficient in the previous study. Acetaldehyde injection induced a higher accumulation of acetaldehyde in the blood than ethanol injection [19]. In fact, the AST and ALT levels after acetaldehyde administration in this study were higher than those after ethanol administration in the previous study. GST catalyzes the reaction of reduced glutathione with xenobiotics. Additionally, the GST molecular species exert an antioxidative effect by working as hydrogen peroxide scavengers [29]. GST subsets are categorized into seven classes (alpha, mu, pi, theta, sigma, zeta and omega) [30]. Gsta4, of the alpha class, is a key enzyme for the removal of 4-hydroxynonenal (4-HNE), produced during lipid peroxidation [31,32]. Gstt2, of the theta class, has glutathione peroxidase activity and reduces lipid peroxides to alcohols [33]. A previous study reported that GST activated by quercetin can protect against acute alcohol-induced liver injury in mice [34].
Our results showed that iso-α-acids significantly increased GST activity, suggesting that GST participated in the protective effect against alcohol-induced liver oxidative injury. Acetaldehyde is a reactive compound that can interact with the thiol and amino groups of DNA, protein, and lipids. Acetaldehyde adducts, such as 4-HNE, may cause the inhibition of protein function, cause an immune response, inhibit ALDH2 and GST activity, and thereby exacerbate ALD [35,36]. Malondialdehyde (MDA) is one of the lipid peroxidation decomposition products and is widely used as a main marker of lipid peroxidation [37]. The TBARS method is a technique used to detect MDA spectrophotometrically through the reaction of MDA with thiobarbituric acid [22]. A previous study showed that ALDH2 contributes to the prevention of ALD by removing acetaldehyde and lipid peroxidation-derived aldehydes, such as MDA and 4-HNE [38,39]. The iso-α-acid diet decreased TBARS levels in the liver, suggesting that iso-α-acids intake inhibits the accumulation of lipid peroxide in the liver by increasing ALDH and GST activity. Nrf2 and its specific repressor Keap1 mediate cellular responses to oxidative stress and regulate the transcription of genes encoding enzymes involved in detoxification, antioxidation and ethanol-acetaldehyde degradation. Nrf2, once dissociated from Keap1, translocates to the nucleus and facilitates the expression of target genes by binding to antioxidant responsive elements in their promoter regions [11]. Nrf2 regulates the expression of multiple antioxidant and detoxification genes during oxidative stress. Nrf2 is ubiquitinated by Keap1 and is degraded rapidly by the proteasome; thus, the Nrf2 protein level is low in cells at steady state [40]. When cells are exposed to toxins or oxidative stress, these stimuli are sensed by Keap1 cysteine residues, which are then modified. Such molecular events reduce the ubiquitination level of Nrf2 and lead to its stabilization and nuclear accumulation [41]. Nrf2 binds to antioxidant response elements in the nucleus and promotes the transcription of antioxidant and detoxification genes, such as the subunits of GST, ALDH, and others [42]. Our in vitro assay showed that iso-α-acids induced the nuclear accumulation of Nrf2 and the elevation of GST and ALDH enzymatic activities. These results strongly suggest that iso-α-acids protected against liver injury by regulating the Nrf2-Keap1 signaling pathway. Further studies, such as the observation of Nrf2 nuclear localization in primary hepatocytes and/or in liver samples from iso-α-acid-fed animals, are required to explain the precise mechanisms involved in the Nrf2-mediated antioxidant effects of iso-α-acids. Similar observations using nuclear transport inhibitor- or proteasome inhibitor-treated cells and Nrf2-null mice would provide further insight. Several compounds have been reported as Nrf2 activators. Dimethyl fumarate is clinically used as a major drug for psoriasis and multiple sclerosis, and bardoxolone methyl is a candidate drug for chronic kidney disease. Plant-derived compounds, including epigallocatechin gallate, sulforaphane, resveratrol and carnosic acid, are also known as activators of the Nrf2-Keap1 signaling pathway and are used as clinical drugs as well as functional food factors [43]. These compounds activate Nrf2 by chemical modification of Keap1 cysteine residues [44].
In particular, sulforaphane upregulates the expression of ALDH and GST genes and protects against alcoholic liver disease via the Nrf2-Keap1 signaling pathway [21,45-47]. Since the present study is consistent with these reports, iso-α-acids may modify Keap1 cysteine residues and activate the pathway. In a previous study, iso-α-acids prevented nonalcoholic fatty liver disease in mice at a dose of 0.5% (w/w) in the diet [48]. In addition, intake of iso-α-acids significantly enhanced ALDH activities at doses of 0.5% or more (S2 Table). We selected the dose of iso-α-acids based on these studies. This concentration is equivalent to approximately 300-500 mg/kg body weight. Commercial beer contains hop-derived iso-α-acids at a concentration of approximately 20-40 mg/L. It was shown that the blood concentration of iso-α-acids reached approximately 0.1 ppm at 30 min after the consumption of 600-800 mL of beer in humans [49]. This study showed that 5 ppm iso-α-acids was effective in hepatocytes. Therefore, the contribution of hop-derived iso-α-acids in beer to the prevention of liver injury may be very small.

Conclusions
In conclusion, we showed that iso-α-acids intake protected against acetaldehyde-induced liver injury. Iso-α-acids upregulated the expression of the GST and ALDH genes, resulting in an increase in their overall activity. Moreover, iso-α-acids promoted the nuclear accumulation of Nrf2. We conclude that iso-α-acids enhanced antioxidation and detoxification and protected against ROS and xenobiotic substances in the liver. Iso-α-acids intake from beer may have some impact on liver protection, although the amount of iso-α-acids in beer is very small. However, the extrapolation of animal data to humans has not yet been clarified in this study. Further investigations, such as animal tests using ethanol-fed mice, ALDH2 knockout mice, biogenetics, and dose assessment, as well as human studies, are required to develop hepatoprotective functional foods containing iso-α-acids.

Supporting information
S1 Table. The effect of iso-α-acids intake on body weight and liver weight in mice. (DOCX)
S2 Table. Body weights, serum biochemical parameters, and hepatic biochemical parameters for three doses of iso-α-acids (without acetaldehyde treatments).
2021-02-11T06:16:34.026Z
2021-02-05T00:00:00.000
{ "year": 2021, "sha1": "de092cbd8d8690f060d980c90772ff72e28afa30", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0246327&type=printable", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "64a42f2c8a4c67865305f07de59e24f3864253a2", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
233237826
pes2o/s2orc
v3-fos-license
Risk of gout among Taiwanese adults with ALDH-2 rs671 polymorphism according to BMI and alcohol intake Background Gout stems from both modifiable and genetic sources. We evaluated the risk of gout among Taiwanese adults with aldehyde dehydrogenase-2 (ALDH2) rs671 single nucleotide polymorphism (SNP) according to body mass index (BMI) and alcohol drinking. Methods We obtained information on 9253 individuals having no personal history of cancer from the Taiwan Biobank (2008–2016) and estimated the association between gout and independent variables (e.g., rs671, BMI, and alcohol drinking) using multiple logistic regression. Results Alcohol drinking and abnormal BMI were associated with a higher risk of gout whereas the rs671 GA+AA genotype was associated with a lower risk. The odds ratios (ORs) and 95% confidence intervals (CIs) were 1.297 and 1.098–1.532 for alcohol drinking, 1.550 and 1.368–1.755 for abnormal BMI, and 0.887 and 0.800–0.984 for GA+AA. The interaction between BMI and alcohol on gout was significant for GG (p-value = 0.0102) and GA+AA (p-value = 0.0175). When we stratified genotypes by BMI, alcohol drinking was significantly associated with gout only among individuals with a normal BMI (OR; 95% CI = 1.533; 1.036–2.269 for GG and 2.109; 1.202–3.699 for GA+AA). Concerning the combination of BMI and alcohol drinking among participants stratified by genotypes (reference, GG genotype, normal BMI, and no alcohol drinking), the risk of gout was significantly higher in the following categories: GG, normal BMI, and alcohol drinking (OR, 95% CI = 1.929, 1.385–2.688); GG, abnormal BMI, and no alcohol drinking (OR, 95% CI = 1.721, 1.442–2.052); GG, abnormal BMI, and alcohol drinking (OR, 95% CI = 1.941, 1.501–2.511); GA+AA, normal BMI, and alcohol drinking (OR, 95% CI = 1.971, 1.167–3.327); GA+AA, abnormal BMI, and no alcohol drinking (OR, 95% CI = 1.498, 1.256–1.586); and GA+AA, abnormal BMI, and alcohol drinking (OR, 95% CI = 1.545, 1.088–2.194). Conclusions Alcohol and abnormal BMI were associated with a higher risk of gout, whereas the rs671 GA+AA genotype was associated with a lower risk. Notably, BMI and alcohol had a significant interaction on gout risk. Stratified analyses revealed that alcohol drinking, especially among normal-weight individuals, might elevate the risk of gout irrespective of the genotype.

Background
Gout is a metabolic disease that results from monosodium urate crystal deposits that are generally associated with high serum urate levels [1,2]. It is common worldwide and its incidence and prevalence are purportedly increasing [3]. Taiwan is among the top-tiered countries with a high prevalence of gout in the world [3,4]. Data from the Nutrition and Health Survey in Taiwan (NAHSIT) from 1993-1996 to 2005-2008 showed an increase in the prevalence of gout from 4.74 to 8.21% in men and 2.19 to 2.33% in women [5]. Moreover, a nationwide study revealed a prevalence of 6.24% and an incidence of 2.74 per 1000 person-years in 2010 [4]. ALDH2 is a vital enzyme in the metabolism of alcohol [22,23]. The ALDH2 variant rs671 is a missense SNP that impedes the enzymatic activity of ALDH2, probably impacting metabolism that results in uric acid synthesis [24]. ALDH2 polymorphisms contribute not only to the metabolism of ethanol and acetaldehyde [25] but also impact predisposition to alcohol-related morbid conditions like hyperuricemia and gout among Asians [18, 19, 26-28].
The link between ALDH2 polymorphisms and serum urate was found to be mediated by alcohol intake among Han Chinese men [19]. ALDH2 rs671 is a proven gout-related SNP [22,29,30]. Insights into interconnections between modifiable and genetic factors could aid in both the prevention and management of diseases. So far, a meta-analysis revealed that alcohol intake could modulate the link between BMI and ALDH2 rs671 among Koreans and Chinese [31]. Moreover, findings from GWAS suggest that BMI-associated alleles of rs671 are also linked to alcohol drinking behavior [25] and alcohol clearance [23]. The role of both BMI and alcohol drinking in the risk of gout according to ALDH2 rs671 genotypes has not been sufficiently investigated. As such, it is currently inconclusive whether the risk of gout varies based on the combination of these variables. In this study, we evaluated ALDH-2 rs671 polymorphism and the risk of gout according to two modifiable factors (BMI and alcohol intake) among Taiwanese adults.

Data source and sample size
We used data from the Taiwan Biobank dataset (2008–2016). The Taiwan Biobank was established to build a data resource consisting of lifestyle and genetic data of a large cohort of Taiwanese adults aged 30 to 70 years. Data collection at Taiwan Biobank recruitment centers is done through questionnaires, biochemical, and physical examinations by well-trained personnel. Each participant signed a consent form prior to the collection of data. Initially, 9553 individuals filled in the Taiwan Biobank questionnaires (containing data on alcohol drinking, sex, age, cigarette smoking, coffee/tea intake, exercise, and diet) and underwent both physical (e.g., weight, height, waist–hip ratio, and body fat) and biochemical tests (including genotyping, blood urea nitrogen, creatinine, HDL, LDL, and TG). However, 300 of them were ineligible for the study due to missing information. Hence, 9253 individuals were included in the final analyses. The Institutional Review Board of Cheng Ching General Hospital approved this study (HP200010).

Description of variables
Gout cases were those who self-reported a clinical diagnosis of gout or those who were confirmed by biochemical tests to have serum urate levels ≥ 7 mg/dL (men) or ≥ 6 mg/dL (women). Alcohol drinking was defined as an intake of at least 150 cc of any alcoholic drink per week continuously for at least 6 months and at the time of data collection. No drinking was defined as drinking less than 150 cc of alcohol per week continuously for at least 6 months. Body mass index, calculated as weight (kg) divided by height squared (m²), was categorized into normal (18.5 ≤ BMI < 24 kg/m²) and abnormal (BMI < 18.5 kg/m² or BMI ≥ 24 kg/m²). Waist–hip ratio (WHR), calculated as the ratio of waist to hip circumference, was grouped into normal (< 0.9 for men and < 0.85 for women) and abnormal (≥ 0.9 for men and ≥ 0.85 for women). Body fat was classified as normal (< 25% for men and < 30% for women) or abnormal (≥ 25% and ≥ 30% for men and women, respectively). Tea consumption referred to drinking tea at least once per day. Exercise, cigarette smoking, coffee intake, and vegetarian diet were defined as previously elaborated [32-34]. Blood urea nitrogen levels above 20 mg/dL and creatinine levels (≥ 1.4 mg/dL in men and ≥ 1.2 mg/dL in women) were considered abnormal.
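To illustrate how the variable definitions above translate into an analysis dataset, the sketch below codes BMI and genotype as described and fits a logistic model for gout with a BMI × alcohol interaction, as outlined in the statistical-analysis subsection that follows. The study itself used PLINK and SAS; this Python/statsmodels version with a synthetic data frame is only an illustrative stand-in, and the covariate list is abbreviated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "gout":    rng.integers(0, 2, n),                    # 1 = case, 0 = non-case (synthetic)
    "bmi":     rng.normal(24, 3.5, n),
    "alcohol": rng.integers(0, 2, n),                    # 1 = drinker per the 150 cc/week rule
    "rs671":   rng.choice(["GG", "GA", "AA"], n, p=[0.55, 0.38, 0.07]),
    "sex":     rng.integers(0, 2, n),
    "age":     rng.normal(50, 10, n),
})

# Categorisations as defined in the text
df["abnormal_bmi"] = ((df.bmi < 18.5) | (df.bmi >= 24)).astype(int)
df["ga_aa"] = (df.rs671 != "GG").astype(int)             # dominant model: GA+AA vs GG

# Multiple logistic regression with a BMI x alcohol interaction (covariates abbreviated)
model = smf.logit("gout ~ abnormal_bmi * alcohol + ga_aa + sex + age", data=df).fit(disp=0)
print(np.exp(model.params))                              # odds ratios
print(model.pvalues["abnormal_bmi:alcohol"])             # interaction p-value
```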
Statistical analyses
The SNP (rs671) passed the quality control criteria (Hardy-Weinberg Equilibrium test p-value > 0.001, minor allele frequency ≥ 0.05, and call rate ≥ 95%). The Chi-square test was used to estimate differences between categorical variables and the results were presented as n (%). The Student's t-test was used to estimate differences between continuous variables and the results were presented as mean ± standard deviation (S.D.). The interaction between BMI and alcohol drinking and the odds ratios for the association between the dependent (gout) and independent variables (rs671, BMI, alcohol drinking, etc.) were estimated using multiple logistic regression analysis. In the regression models, we adjusted for covariates, including sex, age, WHR, body fat, cigarette smoking, coffee intake, tea consumption, exercise, diet, blood urea nitrogen, creatinine, high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), and triglycerides (TG). We used the dominant model for the SNP data because the enzyme activity in those with the rs671 GG genotype is higher compared to the AG and AA genotypes [24]. Moreover, a previous GWAS on gout and rs671 suggested that the dominant model is the model most likely to have higher statistical significance [22]. Data were managed and analyzed using PLINK v1.90 and SAS 9.4 software, and the statistical threshold was set at p-value < 0.05 or the Bonferroni-corrected value.

Results
Table 1 presents the demographic features of cases (n = 2352) and non-cases (n = 6901) of gout. Individuals with and without gout were significantly different based on ALDH2 rs671 genotypes (p-value = 0.0122), alcohol drinking (p-value < 0.0001), and BMI (p-value < 0.0001). Table 2 shows the relationship of alcohol drinking, rs671 polymorphism, and BMI with gout. Alcohol drinking (reference, no drinking) and abnormal BMI (reference, normal BMI) were associated with a higher risk of gout while the GA+AA genotype (reference, GG) was associated with a lower risk. The ORs; 95% CIs; p-values were 1.297; 1.098-1.532; 0.0022 for alcohol drinking, 1.550; 1.368-1.755; < 0.0001 for abnormal BMI, and 0.887; 0.800-0.984; 0.0240 for the GA+AA genotype. The interaction between BMI and alcohol on gout was significant (p-value = 0.006). However, the interaction of rs671 with alcohol and BMI was not significant (Table 2). Table 3 shows the association of alcohol drinking and BMI with gout stratified by rs671 genotypes (GG and GA+AA). Both BMI and alcohol drinking were associated with a higher risk of gout. For alcohol, the association was significant in only the GG category (OR = 1.289; 95% CI = 1.048-1.586; p-value = 0.162). However, for BMI, the association was significant in both the GG (OR = 1.584; 95% CI = 1.332-1.883; p-value < 0.0001) and GA+AA (OR = 1.518; 95% CI = 1.268-1.818; p-value < 0.0001) categories. The interaction between BMI and alcohol on gout was significant for both GG (p-value = 0.0102) and GA+AA (p-value = 0.0175). Tables 4 and 5 illustrate the association between alcohol drinking and gout among participants with ALDH2 rs671 GG and GA+AA stratified by BMI. Alcohol drinking was significantly associated with gout only among individuals with a normal BMI. These results were observed for both GG (OR; 95% CI; p-value = 1.533; 1.036-2.269; 0.0325; Table 4) and GA+AA (OR; 95% CI; p-value = 2.109; 1.202-3.699; 0.0092; Table 5). Table 6 shows the risk of gout in relation to the combination of BMI and alcohol drinking among participants stratified by ALDH2 rs671 genotypes.
Compared to the reference category (no alcohol drinking and normal BMI), the risk of gout was significantly higher for both the GG and GA+AA genotypes when abnormal BMI and/or alcohol drinking were present. Other variables significantly associated with gout (Tables 2, 3, 4, 5, 6, and 7) included sex (higher risk in men compared to women), HDL-C (lower risk), LDL-C (higher risk), and TG (higher risk).

Discussion
In the present study, the rs671 GA+AA genotype was associated with a lower risk of gout, while alcohol and abnormal BMI were associated with a higher risk. Of note, BMI and alcohol had a significant interaction on gout risk among individuals with GG and GA+AA. However, there was no significant interaction of rs671 with either BMI or alcohol drinking. Stratified analyses revealed that alcohol drinking, especially among normal-weight individuals, could confer susceptibility to gout, irrespective of genotype. These findings confirm the major role of alcohol consumption in the risk of gout. However, we cannot state the precise underlying biological mechanisms. Similar to our results, significant interactions between BMI and alcohol on hyperuricemia have been documented [17,35]. Based on their findings, Shiraishi and Une advised obese people to reduce the amount of alcohol they consume [35]. Many past studies reported significant associations between gout and rs671 [22,29,30,36]. This variant has been described as a true gout-related SNP [22,29,30]. The A allele of rs671 has been linked to reduced susceptibility to gout [22]. ALDH2 rs671 also demonstrated the strongest GWA significance for alcohol drinking [21]. It was found to be related to alcohol drinking habits and alcohol flushing responses in Asians [25,37]. Rapid metabolism of acetaldehyde and ethanol associated with a homozygous ALDH2 rs671 genotype was linked to higher levels of uric acid (UA) in Japanese alcoholic men [26]. The relationship between gout and rs671 could in part be accounted for by alcohol drinking [22]. Previous studies on the risk of gout based on alcohol consumption showed conflicting findings. Most pioneer epidemiological research reported no association, probably because of a relatively small number of gout cases and failure to adjust for vital confounders [38-40]. Nonetheless, subsequent studies with larger numbers of gout cases showed significant associations [13,16]. A potential explanatory mechanism implicated in the relationship between gout and alcohol is that alcohol enhances uric acid production and the hepatic breakdown of adenosine triphosphate (ATP) [41]. Moreover, alcoholic drinks like beer are rich in purines, which are associated with high levels of uric acid [42]. Evidence from a study using the UK Biobank data suggested that genetic polymorphisms have a strong effect on gout regardless of BMI [43]. ALDH2 rs671 attained significant genome-wide association for BMI and was reported as the only locus having a significant independent association with BMI [31]. Numerous prospective studies on Asians, Europeans, and Americans suggested that BMI is positively related to the odds of gout and that this relationship is possibly mediated by several factors [8, 9, 39, 43-52]. However, there were also reports of no significant relationship between BMI and gout [40]. The role of BMI in gout pathogenesis could be elucidated based on how leptin responds to inflammation related to monosodium urate crystals [53,54]. BMI could also cause gout through its effect on serum urate [52,55], potentially through insulinemia, which affects renal reabsorption and uric acid clearance [56-59].
Previous studies also had similar findings on the risk of gout pertaining to sex, cigarette smoking, lipoproteins, and other variables [6,7,60,61]. The current study is limited in that the gout population may not be representative of gout patients in the general population. This is because about 33% of gout cases were women; this percentage appears high given that the prevalence of gout in Taiwanese men is about 4 times higher than that in women. Moreover, we defined cases as those who self-reported a clinical diagnosis of gout or those with uric acid levels ≥ 7 mg/dL (men) or ≥ 6 mg/dL (women). However, there was no information regarding patients on effective urate-lowering therapy (ULT), and so the results are possibly not generalizable. In addition, the cohort is 25% gout cases and is thus closer to a case-control cohort than a general population sample. Another limitation of our study is that we could not clearly explain the precise biological mechanisms underlying the reported relationships.

Conclusion
Alcohol and abnormal BMI were associated with a higher risk of gout, while the rs671 GA+AA genotype was associated with a lower risk. Of note, BMI and alcohol had a significant interaction on gout risk among individuals with GG and GA+AA. Stratified analyses revealed that alcohol drinking, especially among normal-weight individuals, confers a great risk of gout irrespective of genotype. These findings confirm the major role of alcohol consumption on gout, and so both normal-weight and abnormal-weight individuals are advised to reduce the amount of alcohol they consume. Reducing alcohol intake could play a great role in public health as it might mitigate the risk of gout.
2021-04-15T14:07:04.013Z
2021-04-15T00:00:00.000
{ "year": 2021, "sha1": "e54bfee5440e8b0d3e670a658db33cc16f61cf8a", "oa_license": "CCBY", "oa_url": "https://arthritis-research.biomedcentral.com/track/pdf/10.1186/s13075-021-02497-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e54bfee5440e8b0d3e670a658db33cc16f61cf8a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
249930846
pes2o/s2orc
v3-fos-license
Severe tropical cyclones over southwest Pacific Islands: economic impacts and implications for disaster risk management Tropical cyclones (TCs) are amongst the costliest natural hazards for southwest Pacific (SWP) Island nations. Extreme winds coupled with heavy rainfall and related coastal hazards, such as large waves and high seas, can have devastating consequences for life and property. Effects of anthropogenic climate change are likely to make TCs even more destructive in the SWP (as observed particularly over Fiji) and elsewhere around the globe, yet TCs may occur less often. However, the underpinning science of quantifying future TC projections amid multiple uncertainties can be complex. The challenge for scientists is how to turn such technical knowledge framed around uncertainties into tangible products to inform decision-making in the disaster risk management (DRM) and disaster risk reduction (DRR) sector. Drawing on experiences from past TC events as analogies to what may happen in a warming climate can be useful. The role of science-based climate services tailored to the needs of the DRM and DRR sector is critical in this context. In the first part of this paper, we examine cases of historically severe TCs in the SWP and quantify their socio-economic impacts. The second part of this paper discusses a decision-support framework developed in collaboration with a number of agencies in the SWP, featuring science-based climate services that inform different stages of planning in national-level risk management strategies.

Introduction
The southwest Pacific (SWP) Island nations (Fig. 1) are prone to catastrophic impacts of strong winds and storm surges, as well as extreme rainfall and associated flooding, during the passage of tropical cyclones (TCs). For example, severe TC Winston (February 2016) caused several fatalities and substantial damage to the infrastructure and agricultural sectors across the Fiji Islands. The direct economic loss (or damage; hereafter referred to as 'economic loss' or simply 'loss') resulting from this cyclone accounted for ~20% of the country's gross domestic product (GDP) (Esler et al. 2016). A year earlier, the economic loss from severe TC Pam (March 2015) exceeded 60% of Vanuatu's GDP, derailing the country's budget and fiscal position, and posing serious consequences for growth and development (McGee et al. 2015; World Bank 2018). The impacts of TCs on societies in SWP Island nations are likely to be further exacerbated in a warming climate due to the increasing influence of anthropogenic climate change, and through growing coastal populations and infrastructure development (Lal 2011). For example, a number of studies have indicated that TCs are likely to become more intense with warming, and their rain-carrying capacity is also likely to increase substantially (see, for example, a review by Knutson et al. (2020)). Such changes, combined with accelerated sea-level rise, can cause increased rates of coastal flooding and storm surges, with potentially drastic economic impacts on societies and the country as a whole (Fig. 2, Woodruff et al. (2013)). Several studies have investigated the trends in the normalised economic loss (i.e. taking into account inflation, increasing population and wealth) due to TCs over the past decades in different regions around the globe. For example, in a recent study over the USA, Grinsted et al.
(2019) revealed an emergent increasing trend in economic loss attributed to a detectable change in extreme storms due to global warming. In India, Raghavan and Rajesh (2003) also showed an increasing trend in the normalised economic loss due to TCs over Andhra Pradesh, but the trend there was attributable mainly to economic and demographic factors rather than any long-term changes in TC characteristics. Similarly, over the mainland of China, Zhang et al. (2009) examined (without normalisation) the direct economic losses and casualties caused by landfalling TCs during the 1983-2006 period. They reported that even though the mortality from TCs shows no trend, the economic costs have increased rapidly during the nation's productivity boom. In the SWP region, Mohan and Strobl (2017) used satellite measures of nightlight intensity to show the short-term impact of TC Pam on the economic activity of the affected islands across Vanuatu. However, as far as we are aware, no literature as yet exists that addresses the long-term changes in TC-related economic losses over the past decades for the SWP region. The overall objective of this work is twofold. We first examine the link between severe TCs and the consequent economic and human losses over several SWP island nations. In order to better capture year-to-year variability of the impacts (both economic losses and fatalities) associated with severe TCs, normalisation of the losses is performed to take into consideration inflation and changes in population and wealth (e.g. Pielke and Landsea (1998)). The other objective is to understand the role of local agencies (such as the National Disaster Management Office, NDMO, and the National Hydrological and Meteorological Services, NHMS) in mitigating the losses through provisions of early-warning systems and planning strategies at various stages of a TC event (Fig. 2, adapted from Woodruff et al. (2013): case 1 represents a condition in the present climate without any TC, and case 2 represents a likely condition in a future climate in the presence of a severe TC event). The rest of the paper is as follows. Section 2 presents the data and methodology, and Sect. 3 provides results and discussion. Section 4 summarises the findings of this research.

Data and methodology
Estimates of TC economic loss were obtained from the respective Government's TC Disaster Assessment and NHMS reports for the study period 1970-2018. Several of these reports are archived at the United Nations Office for the Coordination of Humanitarian Affairs' (UN-OCHA) 'ReliefWeb' portal. Comprehensive TC impact loss data, such as those over the USA (e.g. Pielke et al. (2008)), are also archived at the Pacific Damage and Loss (PDaLo) portal, but due to some discrepancies with the available Government and NHMS reports (e.g. for TC Gita and TC Winston in 2018), such data are only used here for subjective verification purposes. To best represent year-to-year variability, economic losses due to TCs are normalised to take into consideration inflation and changes in wealth and population over time. We use here the conventional approach (e.g. Pielke et al. (2008) and Grinsted et al. (2019)) to normalise TC impact data relative to the 2016 values (note here that 2016 is chosen subjectively as the reference year due to the impact of severe TC Winston in that year over the SWP region). The normalisation methodology assumes that losses are proportional to three factors: inflation, wealth and population.
The general formula to normalise the loss in a particular year relative to the year of interest (in our case 2016) is as follows:

NL_16 = L_y × I_y × W_y × P_y    (1)

where:
• NL_16 is the normalised loss relative to the 2016 values.
• y is the year of TC impact.
• L_y is the loss in year y, in current-year dollars (and not adjusted for inflation).
• I_y is the inflation adjustment factor. This factor is computed using the Implicit Price Deflator for Gross Domestic Product (IPDGDP) and is the ratio of the 2016 IPDGDP to that in the TC impact year (e.g. Pielke et al. (2008)).
• W_y is the wealth factor, computed using the nominal GDP, the inflation adjustment factor I_y, and the population.
• P_y is the population factor.

A detailed description of the normalisation procedure can be found in Pielke et al. (2008) and Grinsted et al. (2019). For the purpose of this study, IPDGDP, GDP and population data were obtained from the World Bank Open Data Online Portal. Moreover, TC intensity and track data were obtained from the Southwest Pacific Enhanced Archive for Tropical Cyclones (SPEArTC: Diamond et al. (2012)) database.

The number of severe TCs passing in close proximity to each country is shown in Fig. 3. Over Fiji and Vanuatu, the number of severe TCs passing in close proximity shows a slight decreasing trend, whereas for Samoa and Tonga there is a slight increasing trend (note, these trends are not statistically significant at the 95% confidence level). Table 2 shows the meteorological parameters of severe TCs passing within 50 km of the FSTV region, and the respective tracks of those TCs are shown in Fig. 4(a-d). Of all TCs during the period 1970-2018, Winston was the most severe TC in the region, with a record minimum sea level pressure of 884 hPa and 10-min sustained windspeed of 278 km h⁻¹; it directly impacted Fiji on the 20th of February 2016. TC Pam, with a minimum sea level pressure of 896 hPa and 10-min sustained windspeed of 250 km h⁻¹, was the most severe storm to have impacted Vanuatu, on the 13th of March 2015. TC Ofa, with a minimum pressure of 925 hPa, was the most severe TC to affect Samoa, on the 4th of February 1990, whereas Gita (with a minimum pressure of 927 hPa and 10-min sustained windspeed of 204 km h⁻¹ at some point in its lifetime) was the most severe storm to affect Tonga, on the 12th of February 2018. All of these TCs had significant impact over the respective regions, as will be shown in the following sections.

Economic losses over the FSTV region
Table 3 shows the estimated and normalised losses, adjusted to the 2016 values, associated with severe TCs (listed in Table 2) impacting the FSTV region. The estimated losses show large temporal differences, even after taking into consideration inflation, wealth and population factors. Losses (i.e. inflation corrected) due to the most recent TCs are very high over some of the nations. Over Fiji, losses due to TC Winston in 2016 amounted to FJD 1.990 billion, compared with severe TCs Oscar (1983) and Kina (1993), where the losses were FJD 0.649 billion and FJD 0.513 billion respectively; clearly, the loss associated with TC Winston was over three times greater than that of the other two severe events. The total loss due to TCs in the last decade (i.e. 2010-2019) is also significantly larger than that in the previous decades: FJD 2.363 billion (2010-2019); FJD 0.294 billion (2000-2009); FJD 0.6934 billion (1990-1999); FJD 0.884 billion (1980-1989); and FJD 0.350 billion (1970-1979).
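As a concrete illustration of Eq. (1), the snippet below normalises a hypothetical loss figure to 2016 values and also computes the two fatality indices (D/NL and D × NL) discussed later in this paper. The deflator, GDP and population numbers are invented placeholders, not World Bank values, and the exact construction of the wealth factor is a plausible reading of the methodology rather than the authors' code.

```python
def normalised_loss_2016(loss_y, ipd_y, ipd_2016, gdp_y, gdp_2016, pop_y, pop_2016):
    """Normalise a current-year loss to 2016 values following Eq. (1):
    NL_16 = L_y * I_y * W_y * P_y."""
    infl = ipd_2016 / ipd_y                                   # inflation adjustment, I_y
    # Wealth factor: ratio of real (inflation-adjusted) GDP per capita, 2016 vs impact year
    wealth = (gdp_2016 / pop_2016) / (gdp_y * infl / pop_y)   # W_y
    pop = pop_2016 / pop_y                                    # population factor, P_y
    return loss_y * infl * wealth * pop

# Hypothetical example: a FJD 0.5 billion loss in 1993 expressed in 2016 terms
nl16 = normalised_loss_2016(loss_y=0.5e9, ipd_y=55.0, ipd_2016=100.0,
                            gdp_y=2.0e9, gdp_2016=9.0e9, pop_y=0.75e6, pop_2016=0.90e6)
print(f"Normalised loss: FJD {nl16 / 1e9:.2f} billion")

# Fatality indices for a given event (deaths D_y and normalised loss NL_y)
d_y = 26
print("D/NL index :", d_y / nl16)       # deaths per unit of normalised loss
print("D x NL index:", d_y * nl16)      # combined severity factor
```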
Economics losses over the FSTV region Similarly, based on the data analysed here, the highest loss for Tonga was due to TC Gita in 2018 (~ TOP 0.317 billion) followed by TC Waka in 2001 (TOP 0.231 billion). Samoa did not experience any major TC-related losses in the last two decades. However, during the period 1990-1999, we saw two back-to-back severe TCs over the country: TC Ofa (1990) and TC Val (1991) that caused damages close to WST 2.3 billion and WST 4.5 billion respectively. TC-associated loss over Vanuatu, similar to Samoa, does not show any trend over the past decades. While loss due to the recent TC Pam (2015) was the largest in the currentyear vatu (VUT 63.20 billion), loss after normalisation was the largest (VUT 95.78 billion) for TC Uma (1987). Following TC Uma and Pam, the major normalised impact losses were 2.62 and 1.91 billion vatu associated with TC Prema (1993) and TC Dani (1999) respectively. These findings in general show that, except for Fiji, clear increasing trends in TC economic losses for other countries are absent. While we acknowledge that the lack of complete economic data for some countries may affect the trends, improved DRR efforts by various sectors may have played a critical role in supressing economic loss and fatalities (see subsequent discussions). Disaster response and fatalities The number of severe TC-related fatalities can provide an indirect measure of a country's level of preparedness and response for a particular event. Table 4 shows the fatalities related to the severe TCs (that are listed in Table 2) impacting the FSTV region. For Fiji, the largest TC-related fatality was 52 during TC Meli (1979), followed by 44 fatalities that were during the passage of severe TC Winston (2016). TC Eric (1985) and TC Kina (1993) recorded total fatalities of 28 and 26 respectively, whereas those associated with other TC events were below 20 deaths. For Samoa, the number of recorded TC-related deaths was mainly below 10, except for the severe TC Evan (2012) that caused 14 deaths in the country. For Tonga, the largest number of fatalities (i.e. 6) was also during the passage of TC Evan (2012), whereas, for Vanuatu, the largest number of TC-related deaths was 50 during the passage of TC Uma (1987), followed by 15 fatalities during TC Pam (2015). Losses in human lives translate to losses in GDP over time. However, economic loss values due to TCs, such as those provided in Table 3, do not take this into consideration (Raghavan and Rajesh 2003). Moreover, trends in fatalities are an excellent indicator of the improvements in the overall DRM and DRR over time. Reasonable metrics (or indices) to measure this are as follows: (i) the ratio of fatality (D y ) to economic loss (NL y ) and (ii) the product of the two (i.e. D y × NL y : Raghavan and Rajesh (2003)). A decreasing trend in the former means fewer deaths per unit of economic loss which would signify improvements in DRM and DRR. In the latter metric, a decreasing trend would signify either a decrease in D y or NL y or both. Table 4 shows the fatality indices related to TCs that are listed in Table 2. Over Fiji, the D NL index shows a decreasing trend that peaked during TC Gavin (1997), and this indicates an overall improvement of DRR over time. The D × NL factor peaks during TC Winston (2016), which could be associated with an exceptionally large impact loss. 
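Before turning to the other countries, a minimal sketch of how the two indices above can be computed is given below. The event years, fatalities and normalised losses are placeholder values, not the entries of Tables 3 and 4, and the sign of a least-squares slope is used only as a crude indicator of the trend direction.

```python
import numpy as np

# Illustrative sketch of the two DRR indices discussed above:
#   D/NL = fatalities per unit of normalised economic loss
#   D*NL = fatalities multiplied by normalised economic loss
# Event years, deaths and normalised losses below are placeholders.
years  = np.array([1979, 1985, 1993, 1997, 2016])
deaths = np.array([52, 28, 26, 18, 44])
nl     = np.array([0.35e9, 0.40e9, 0.51e9, 0.30e9, 1.99e9])  # 2016 currency

d_nl   = deaths / nl
d_x_nl = deaths * nl

# Sign of a least-squares slope as a crude indicator of the trend direction.
for name, index in [("D/NL", d_nl), ("D*NL", d_x_nl)]:
    slope = np.polyfit(years, index, 1)[0]
    print(f"{name}: {'decreasing' if slope < 0 else 'increasing'} trend")
```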
For Tonga and Vanuatu, both the indices show a decreasing trend, signifying improvements in DRR, Table 4 Fatalities associated with severe TCs listed in Table 2 Year whereas for Samoa, the results are mixed: the D NL index shows no trend but the D × NL shows a decreasing trend, though a larger sample size is needed to make any confident conclusion here. Current responsibilities of NHMS The role of the local NHMS is critical for DRM and DRR, particularly with those associated with TCs and extreme rainfall. The Fiji NHMS (known as the Fiji Meteorological Services or FMS) is also designated as the Regional Specialised Meteorological Centre (RSMC) by the World Meteorological Organization (WMO) under the World Weather Watch Program specialising in TCs. With this designation, the Fiji NHMS is responsible for the provision of 'first-level' information on TCs, such as basic information covering TC's present and forecast positions, as well as movement and intensity in the SWP region, from 0° to − 25°S, and 160°E to 120°W. Nonetheless, the general roles of the individual NHMSs in the region include provision of essential weather, including TC-related services, and climate and hydrological services which we briefly discuss here. Foremost, the NHMSs are responsible for continuous monitoring of weather and providing daily forecasts to the general public, marine and aviation. The FMS in particular is responsible for providing these services to Fiji, the Cook Islands, Kiribati, Nauru, Niue, Tokelau, Tonga and Tuvalu. During TCs, the NHMSs issue Special Weather Bulletins (SWB) every 6 h upon issuance of a TC alert, the frequency of which is usually increased to every 3 h with the upgrade to a TC warning. Together with that, TC threat and forecast maps are issued every 6 h. Alerts for heavy rainfall are also issued if the estimated 24-h rainfall exceeds 100 mm within the next 48 h (but not before 24 h) and these could later be upgraded to a warning if the estimated rainfall exceeds 100 mm within the next 24 h. The NHMSs also have Climate Services Division whose operations include the (1) National Climate Data Centre, and (2) National Climate Monitoring and Prediction Centre. Within the framework of the climate services delivery, the NHMSs are responsible for monitoring of climate variability and change. They also promote the effective use of these data and provide consultative services for planning, research and decision-making processes that are critical for socio-economic development. The Climate Monitoring and Prediction Centre issues regular climate monitoring products, including updates on El Niño-Southern Oscillation (ENSO) status and monthly-toseasonal outlooks for atmospheric variables such as rainfall and air temperature. It also issues a regular ocean outlook which includes information on sea surface temperatures (SSTs), fisheries convergence zone, sea level, coral bleaching and astronomical tide predictions. Specialised monthly-to-seasonal products for sectors such as agriculture (e.g. sugar cane in Fiji), energy (e.g. hydroelectricity), health (e.g. Malaria Early Warning System) and climate data are provided when needed or upon request for decision-makings (e.g. engineering designs within infrastructure departments, planning in the agriculture sector, NDMO and health). Hydrological Services have also been recently introduced in some of the NHMSs (e.g. Fiji) to issue flood advisories. 
These are mostly based on river-level monitoring through telemetered systems, and the FMS has, in particular, incorporated a flash flood guidance system with the assistance of the WMO. Most of the above services and products are delivered routinely to all sectors, including communities, government, media organisations and other Meteorological Services in the region and the world. Social media platforms and mobile phone applications, together with animations and videos, are increasingly used to disseminate these weather, climate and hydrological information. While these services are publicly available, the NDMO is one of the key specialised users of it. During any anticipated deterioration of weather, climate and hydrological condition, the NHMS provides regular briefings to NDMO, including face-to-face communication and product updates. It is also important to note here that all the regional NHMSs coordinate their scientific and technical programme and activities through the Pacific Meteorological Council (PMC 5 ). Preparedness and disaster response: TC Winston (2016) as a case study To reduce the impacts from TCs, a series of actions is mandatory before, during and after the passage of TCs. Here we use Severe TC Winston (2016) as an example to illustrate the preparedness and response by the NHMS and NDMO during a TC event over the SWP region. TC history and disaster preparedness TC Winston, a category 5 system in the Australian TC intensity scale, made landfall on 20 February 2016 at 0600 UTC over Viti Levu, Fiji, directly affecting the region of Rakiraki (see the track in Fig. 4(e)). The central pressure and 10-min sustained windspeed during the landfall were 886 hPa and 280 km h −1 respectively. Following is a brief history of TC Winston before making landfall over Fiji and the disaster preparatory actions undertaken by the relevant organisations such as the NDMO. The FMS started monitoring tropical disturbance (TD) '09F' on 7 February 2016, which had developed near 8.4°S and 170.6°E (far northwest of Fiji: FMS (2016m)). Early on 11 February, the FMS upgraded the TD to a category 1 TC and named it 'Winston' (FMS 2016g). Later, around 1200 UTC on the same day, Winston intensified into a category 2 TC, as a small, well-defined eye developed within the deepening convection (FMS 2016h). On 12 February, Winston rapidly intensified into a category 3 TC by 0600 UTC and then a category 4 TC by 1200 UTC (FMS 2016i). By 0000 UTC on 15 February, TC Winston reduced in intensity and became a category 2 system (FMS 2016j) before upgrading to a category 3 system at 1800 UTC on the 16th (FMS 2016k). By 18 February, the intensifying system was translating towards Fiji (FMS 2016f). It reached category 5 level at 0600 UTC on the 19th, with 10-min sustained winds reaching 205 km h −1 (FMS 2016l). Around 1800 UTC, Winston passed over the Fijian island of Vanuabalavu with a momentary wind gust of 306 km h −1 (FMS 2016a). TC Winston attained its maximum (sustained) intensity around 0000 UTC on 20 February with 10-min sustained winds of 280 km h −1 and a central pressure of 884 hPa (SPEArTC). It made landfall over Rakiraki around 0600 UTC at peak intensity (SPEArTC) making it the only known category 5 TC (until 2018), to directly impact Fiji, and therefore the most intense storm on record to strike the nation (SPEArTC). 
It also marked the strongest landfall by any TC in the South Pacific basin (SPEArTC), and one of the strongest landfalls worldwide since the modern era of global records began in 1970 (Liberto 2016). The FMS started issuing TC warnings for the Fiji group on 14 February (FMS 2016c). However, these were called off on 16 February as Winston moved away from the nation (FMS 2016b). TC warnings commenced again on 18 February after Winston started moving towards Fiji for the second time and these warnings were issued for the northern and eastern islands (FMS 2016d). The northern islands in the TC's direct path were placed under severe hurricane warnings on 19 February (FMS 2016e). The NDMO activated, in total, more than 700 evacuation centres throughout the nation (RNZ 2016) and the Republic of Fiji Military Forces (RFMF) were placed on standby for relief efforts (Talei 2016). On the afternoon of 20 February, a state of emergency was declared (Swami 2016), a nationwide curfew was enacted starting at 1800 LT (Singh 2016) and public transportation services were suspended across Viti Levu. Fatalities A total of 44 fatalities resulted as a direct consequence of TC Winston. While there is no difference in fatality based on gender, differences do exist when age and ethnicity factors are considered (Esler et al. 2016). For the age group below 15 years, the fatality is ~ 13%, but it is 50% for the 15-64-year group. For the age group 65 and above, which makes just 4% of the total Fijian population, the mortality is disproportionally high at ~ 37% (Esler et al. 2016). With respect to ethnicity, a larger percentage of fatalities was observed for the iTaukei (first nation) group in comparison with the Indian descendant (Indo-Fijian) group. For the iTaukei group, which comprises 57% of the Fijian population, this figure stands at 97% in comparison to 3% for the Indo-Fijian group that make 37% of the population (Esler et al. 2016). Even after stratifying into regions based on ethnicity, this is still valid. Areas such as the Koro Island, where 21% of deaths occurred, are largely inhibited by iTaukei. On the other hand, in areas which are primarily populated by Indo-Fijians, e.g. Ba (54%), the mortality amongst the iTaukei group was still higher (Esler et al. 2016). This discrepancy in the fatality between the ethnic groups requires further investigation and appropriate measures for future improved TC-related disaster responses. Implications for NDMO and NMHS and their production and delivery of climate services to support decision-making in DRM: lessons learned and future directions While necessary preparatory and disaster response actions, which included early warnings both to decision-makers and to the general public, were undertaken, the resultant 44 deaths (in particular, the high rate of mortality in the iTaukei group) show that there are still some shortcomings in the approach and appropriate actions need to be taken to correct these shortcomings. Some of the actions that could be stipulated to help improve DRR and disaster preparedness are outlined as follows: • Increase public awareness and individual preparedness Since TCs are a common phenomenon in the SWP region, science-based services such as the NHMS could work closely with the Ministry of Education to develop meteorology-related syllabuses for schools. Schools could also organise short visits to the NHMS throughout the academic year to give more insight into what is being taught about meteorology in their curriculum. 
Schools could also annually invite NHMSs to deliver pres-entations on early warning alerts and potential actions expected from the general public, particularly if there is no meteorology-related curriculum. Moreover, science-based services (e.g. NHMS) and other relevant organisations (e.g. NDMO) could organise workshops discussing key concepts surrounding TCs, DRR, preparedness and response. A short book or pamphlet about TCs in various languages could also help increase public awareness particularly with regard to the impacts of TCs. • Upgrade the current tools used to observe and forecast weather There is a need to improve the equipment, programmes and forecasting tools used to provide weather information. These include, for example, the installation of Automatic Weather Systems (AWSs), sourcing of satellite products, improving and upgrading Numerical Weather Prediction (NWP) models and installation of a network of weather radars. • Home retrofitting and building back better While not associated with NHMS, it is important to mention here that home retrofitting could also aid mitigation, prevention and preparedness. Approximately 30,369 houses, 495 schools and 88 health clinics and medical facilities were damaged or destroyed during the passage of TC Winston. Assessing the photos (e.g. Fuata (2016)) show that some structures have suffered major damages mostly to the roofing and openings (e.g. windows). To mitigate such effects of strong winds, owners could have their buildings assessed by professionals and accordingly retrofit these structures with necessary reinforcing materials (such as hurricane straps and roofing screws). While there is no guarantee that these measures will provide an absolute protection during the passage of a TC, they nonetheless substantially decrease the likelihood of damage during these events compared to when not implementing them. TC decision-support framework in the context of a changing climate Understanding the likely changes in TC characteristics at regional and local scales due to global warming is challenging. This is at least in part due to the lack of long-term TC records in the Pacific for detection and attribution of climate change signals (Knutson et al. 2019) and partially because of the large deficiencies and biases in climate model simulations of regional and local-scale TCs (Wang et al. 2014). Reliable observations of TCs became available only from around the 1970s when satellite monitoring became operational, but these few decades of data are not sufficient to adequately resolve climate change signals particularly in the presence of large climate variability across and between decades (Klotzbach and Landsea 2015; Moon et al. 2019). Moreover, global climate models are not typically run at scales needed to resolve changes in TC characteristics (particularly intensity) at island-scale, substantially limiting our ability to understand the effects of climate change on TCs over small island countries Fig. 5 Schematic representation of the linkages between natural climate variability, human-induced global warming and tropical cyclones. Note that this diagram is not exclusive and does not quantify the changes, but instead demonstrates how different climatic factors may interact to affect TC characteristics and associated impacts over the SWP region ( 1 Vecchi et al. 2006;2 Yeh et al. 2009;3 Power and Kociuba 2011;4 Kim and Yu 2012;5 Sugi et al. 2012;6 Sugi and Yoshimura 2012;7 Tokinaga et al. 2012;8 Church et al. 2013;9 Hartmann et al. 
2013; 10 Tory et al. 2013;11 Woodruff et al. 2013;12 Cai et al. 2014;13 Kossin et al. 2014;14 Lucas et al. 2014;15 Walsh et al. 2016;16 Chand et al. 2017;17 Taupo and Noy 2017;18 Chand 2018;19 Kossin 2018;20 Sharmila and Walsh 2018;21 Andrew et al. 2019;22 Chand et al. 2020) Climatic Change (2022 in the SWP region. Nevertheless, from past literature (Vecchi et al. 2006;Power and Kociuba 2011;Tokinaga et al. 2012;Tory et al. 2013;Woodruff et al. 2013;Kossin et al. 2014;Lucas et al. 2014;Walsh et al. 2016;Chand et al. 2017 Lee et al. 2021), trends in observation records, climate model simulations and theoretical understanding, we have a good understanding of how various climatic conditions may interact to affect TCs and associated impacts in the SWP region as depicted in Fig. 5. It is important to note here that this schematic does not make any attempt to quantify the changes, but instead demonstrates how different climatic factors may interact to affect TC characteristics and associated impacts in the SWP region. The level of risk associated with a TC event is already very large across the SWP region as shown earlier, and TC-induced risks are likely to exacerbate further as a consequence of global warming, particularly during future-climate El Niño periods. The increasing threats from storm surge and coastal flooding due to sea-level rise should be of concern for atolls and islands, particularly those located within the TC-impact zones (e.g. Cook Islands, Fiji, Samoa, Solomon Islands, Tonga and Vanuatu). Such threats can also have direct consequences on long-term food and water security for the atoll and island communities, for example through intrusion of saltwater into freshwater storages and cultivable land, creating irrecoverable damages to the livelihood, and in some cases through permanent loss of islets (Hisabayashi et al. 2018). Climate change adaptation planning for managing future TC-induced impacts can be very complicated for highly vulnerable atoll and island countries in the SWP region. However, some recent events, such as those associated with TC Pam and TC Winston, can present a window of opportunity to trigger transformational changes in the communities' approach to TC-induced risks and impacts. These transformational changes can range from developing quintessential strategies for mitigating immediate impacts of TCs-such as emergency shelters and effective evacuation procedures-to long-term technological and engineering solutions-such as seawalls, levees, desalination plants and storm water harvesting-to mitigate effects of rising sea-level and storm surges, and their potential impacts on food and water security. Recommendations Pacific Island Countries and Territories (PICTs) are taking collective action towards developing a regional TC preparedness and response framework via the Pacific Meteorological Council (PMC). The PMC is a specialised body of the Secretariat of the Pacific Regional Environment Programme (SPREP) comprised of the directors or heads of PICT meteorological services that was established in 2011 to facilitate and coordinate the scientific and technical programme and activities of the Pacific island region's meteorological services. At the most recent biennial PMC meeting held in Samoa in 2019, the Pacific Islands Climate Services (PICS) Panel 6 of the PMC identified the need to work towards an agreement between PICTs' National Meteorological and Hydrological Services (NMHS) and regional providers (i.e. 
Fiji Meteorological Service, National Institute of Water and Atmospheric Research (NIWA) of New Zealand and the Australian Bureau of Meteorology) on a more coordinated release process and timeline for SWP TC outlooks. The PMC identified two priority opportunities for improving seasonal TC preparedness communication; (i) regional providers were prone to disseminate regional and country TC outlooks before NMHS have had an opportunity to prepare their own TC outlook statements and media releases, and (ii) each provider uses a different methodology and different boundaries leading to different outlooks and thus some confusion amongst users. To address the first issue, PMC has worked with regional providers to make embargoed draft TC outlooks available to NMHS a week prior to release, enabling NMHS to tailor local language media releases that facilitate local understanding and uptake of regional TC outlook products. On the second issue, PICS has released good practice guidelines (PICS Panel 2020) for producers and users of SWP TC outlooks that include recommendations to prioritise user needs over scientific interest and to engage with stakeholders to determine their capacity to interpret scientific and multi-model information. To further enhance communication of seasonal TC preparedness, the WMO Regional Association Five (RA-V) Pacific Regional Climate Centre (RCC) Network 7 has published an online ENSO tracker (RCC 2020) product that gathers and summarises the status of ENSO according to different global institutions and combined with other educational and awareness products, like the TC outlooks, can help PICTs' stakeholders better understand their seasonal TC risk. The Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO) together with SPREP and other partners have also published a Pacific-themed, animated video called the 'Adventures of the Climate Crab' (CSIRO 2013) in multiple Pacific languages that explain to a general audience ENSO and its effects on PICTs' Seasonal TC risk and how to prepare. It is important to ensure that such products and services are culturally appropriate, understandable by the intended audience, and tailored to stakeholders' needs. While NMHS are responsible for transmitting seasonal and event-scale TC information, many Pacific communities have difficulty understanding the information provided, have delayed access to weather and climate information, have access but mistrust the accuracy of the information by NMHS, or have no access to any information provided by NMHS due to remoteness or isolation of communities (Lui et al. 2017;Plotz et al. 2017;Chambers et al. 2019). Thus, many remote Pacific communities rely on weather and climate forecasts based on traditional knowledge (TK) alone or in combination with contemporary (NMHS) forecasts (Magee et al. 2016;Chambers et al. 2019). When category 5 TC Zoe (2002), the second most intense TC ever recorded in the Southern Hemisphere at that time, devastated the Solomon Islands outliers of Tikopia and Anuta in 2002, islanders without access to NMHS forecasts or warnings relied on TK to survive the disaster without the loss of a single life (France-Presse 2003; SPREP 2019). Tikopian Elders were able to use TK of unusual movements of ocean currents and bird behaviour to forecast the coming TC and evacuate the community to caves where they sheltered safely until TC Zoe passed (SPREP 2019). 
When designing a TC decision-support framework for PICTs, the incorporation of traditional forecast methods into contemporary forecast systems can lead to forecasts that are locally relevant and better trusted by the users, which in turn could significantly improve the communication and application of climate and weather information, especially to remote communities (Chand et al. 2014;Plotz et al. 2017). Thus, Plotz et al. (2017) recommend that such a framework comprises of four main decision points: (1) consideration of the level of involvement of traditional-knowledge experts or the community that is required; (2) existing levels of traditional knowledge of climate forecasting and its level of cultural sensitivity; (3) the availability of long-term data-both traditional-knowledge and contemporary-forecast components; and (4) the level of resourcing available. There is no one-size-fits-all approach when it comes to developing climate change adaptation and mitigation strategies, particularly for highly vulnerable communities where TCinduced impacts are of a major concern. Drawing on experiences from past TC events as analogies to what may happen in the warming climate is imperative. Such activity should emphasise a 'bottom-up' approach (Pielke Sr et al. 2012) that requires local community and stakeholder-led discussion to first determine and evaluate the level of TC-induced threats to critical local factors like food and water security, and then develop relevant and relatable adaptation and mitigation strategies for each factor. Moreover, strategy development for adaptation planning processes needs to be an iterative process to take into account updated science on climate extremes. Adaptation decisions are not static but need to be themselves 'adaptive' to new information and knowledge (Morioka et al. 2020). Moving forward, it is critical that relevant national agencies (such as NHMS, NDMO and Department of Climate Change) understand the differences in the likelihood of TC occurrences between different climatic conditions (e.g. El Niño and La Niña periods) and clearly communicate this to sector stakeholders and the wider community through their information products and awareness-raising. Such approaches are believed to be a more inclusive way of assessing impacts of TCs and adopting more effective adaptation methodologies to deal with the complexity of potential extreme events affecting the SWP region. Summary Normalised economic losses (based on wealth and population) and fatalities during the passage of TCs in close proximity to the Fiji, Samoa, Tonga and Vanuatu (FSTV) region for the period 1970-2018 have been investigated in this study. An increasing trend in normalised economic loss due to TCs over the study period is not evident for most nations, except for Fiji where TC Winston in 2016 caused the most damage (USD 0.9 Billion) to the nation. Moreover, TC Val (1991), Gita (2018) and Uma (1987) caused the most extensive economic losses over Samoa (USD 1.782 Billion), Tonga (USD 0.146 Billion) and Vanuatu (USD 0.922 Billion) respectively. Hence, TC Val (1991) could be considered the costliest TC over the FSTV region. In terms of fatalities, the largest number of severe TCassociated deaths has occurred over Fiji and Vanuatu at 52 and 50 during TC Meli (1979) and TC Uma (1987) respectively. For Samoa and Tonga, the highest numbers of fatalities of 14 and 6 were due to TC Evan (2018) and TC Isaac (1982) respectively. 
Trends in fatalities are an important indicator of the overall improvements in the DRR and DRM over time, and this was examined here using the ratio of fatality to normalised economic loss. A decreasing trend in this index is generally evident, which signifies that there have been improvements in DRR and DRM over time in the FSTV region. TC Winston has been used as a case study to illustrate the disaster preparedness and response during a TC event over the FSTV region and future recommendations have been suggested to reduce impacts during the passage of TCs. These recommendations include (i) increasing public awareness and individual preparedness, (ii) upgrading the current tools used to observe and forecast weather and (iii) home retrofitting. The discrepancy in fatalities between ethnic groups during TC Winston also requires further investigation and appropriate measures for future disaster TC responses. Finally, we discussed the collective action taken by Pacific Island Countries and Territories (PICTs) towards developing a regional TC preparedness and response framework via the Pacific Meteorological Council (PMC). In particular, when designing a TC decisionsupport framework for PICTs, the incorporation of traditional forecast methods into contemporary forecast systems can lead to forecasts that are locally relevant and better trusted by the users, which in turn could significantly improve the communication and application of climate and weather information, especially to remote communities. Such a framework should comprise four main decision points: (1) consideration of the level of involvement of traditional-knowledge experts or the community that is required, (2) existing levels of traditional knowledge of climate forecasting and its level of cultural sensitivity, (3) the availability of long-term data-both traditional-knowledge and contemporary-forecast components-and (4) the level of resourcing available.
Cold, clean and green: improving the efficiency and environmental impact of a cryogenic expander The Dearman Engine is a cryogenic expander, utilised in the case of combined power and cooling, currently applied in truck refrigeration units. As the temperatures involved (~30 °C) are significantly lower than those experienced in an internal combustion engine, there is scope for material replacement. These operating conditions open up the opportunity to employ polymers and exploit their tribological properties. A composite is often used to combine preferable properties of each material. It has been hypothesised that a composite could be replaced by a cheaper alternative: a laminate. Five materials were tested for friction and wear: PEEK, PTFE, PTFE-PTFE laminate, PEEK-PTFE composite and PEEK-PTFE laminate. The surfaces were also examined under an SEM/EDS post test in order to determine any defects and to detect any transfer layers. The results showed that the friction and wear for the PTFE-PTFE laminate were similar to that of pure PTFE and so the bonding of the material had no impact on the overall result. This was also the case in the PEEK-PTFE composite and laminate. The major difference was the presence of a transfer layer in the PEEK-PTFE laminate that was not present in the composite. These results suggest that lamination is a suitable alternative that warrants further investigation. Introduction With increasing restrictions on emissions and consumer awareness of environmental impact, there is a strong drive to reduce carbon footprint. One of the largest contributors to inner city pollution is refrigeration of food in the back of lorries destined for supermarkets [1]. This is an application where a combination of power and cooling is required. Liquid nitrogen is a waste product produced by various industries and with its high thermal expansion index during the boiling process, could have many potential applications. The Dearman truck refrigeration unit (TRU) provides clean power and cooling powered by this waste resource. The TRU consists of a single piston engine connected to a vapour compression refrigeration unit. The engine piston is driven by either liquid nitrogen or liquid air, which expands 710 times when transitioning from liquid to gas [2]. The use of a heat exchanger fluid significantly increases thermal efficiency without the need for re-heating [3]. Polymers are rarely used in typical combustion engines due to the high temperatures involved [4] meaning that their weight and power saving advantages can not be exploited. However, the much lower operating temperatures of the Dearman engine (∼30 • C) mean that this technology has many potential applications and one key material of interest is poly-ether-ether-ketone (PEEK) [5]. One of the major benefits of using polymers is the ability to combine two polymers to utilise the beneficial properties of the materials used [6]. Traditionally a stiff polymer is used in combination with a lubricious polymer in order to provide a component that is durable and reduces the coefficient of friction [7]. This combination tends to be produced by embedding the softer polymer into a stiffer matrix material. These tend to be more expensive compared to unreinforced polymers and the interfacial boundary between materials can adversely affect mechanical properties. PTFE-PEEK composites have been investigated in a variety of different ratios. 
However when the PTFE percentage is larger than that of the PEEK, they do not perform favourably an undesirable phenomenon known as grooving occurs [6]. The tribological film produced during grooving also has a lower thickness resulting in lower lubricity. Although composites are widely used to obtain properties of multiple materials, there is some evidence to suggest that using laminated materials can provide comparable friction with composites. Qi et al. investigated the tribological performance at elevated temperatures using a pin on laminated aluminium oxide with molybdenum, the friction was 60% lower than the monolithic material [8]. However, at ambient temperature the AlO 3 /Mo composite gave better results than the laminate. This paper aims to investigate the impact of laminated PTFE with PEEK to determine if its tribological performance is better than that of a composite of the same PTFE-to-PEEK ratio. Materials and Methods Five different material combinations were tested: PEEK, PTFE, PTFE-PTFE Laminate (PTFE L), 20% PTFE -PEEK Laminate (PTFE-PEEK L) and 20% PTFE -PEEK Composite (PTFE-PEEK C). All the materials were sourced from Direct Plastics (UK) other than the PTFE-PEEK C which was supplied by Solvay (Atlanta, Georgia, USA). These were machined to the dimensions as shown in Figure 1. Where the materials have been bonded for testing, the surfaces were initially primed with a polyolefin primer (Loctite 770) and then bonded with ethylcyanoacrylate (Loctite 496). PTFE-PTFE L was tested to investigate the effect of the bonding process on the sample. In order to ensure the samples were flat they were polished with an abrasive paper before they were run. The samples were tested on a TE77 Reciprocating Tribometer (Phoenix Tribology, Hampshire, UK) in order to measure the friction and to generate wear. A bespoke upper specimen holder was produced for the rig to clamp the polymers in place. Table 1 shows the testing parameters used, they were selected to simulate the worst tribological condition in the Dearman Engine, start up. The upper specimens were weighed pre-and post-test in order to quantify wear. The samples were then also examined under a scanning electron microscope (SEM) and analysed using energy dispersive X-ray spectroscopy (EDS). Figure 2a shows the median coefficient of friction as measured by the TE77. It can be seen that the PEEK sample had a significantly higher coefficient of friction than the rest of the samples tested. The rest of the samples had a very similar coefficient of friction and a Mann-Whitney U test showed that there was no significant difference between the four datasets at a 95% confidence interval. This is an exciting result for two major reasons. One is that the pure PTFE sample and the PTFE-PTFE sample having no significant difference suggests that the process of bonding two polymers together has no effect on the coefficient of friction under these conditions. Another interesting result is that a 20% PTFE-PEEK mix produces a coefficient of friction statistically similar to a pure PTFE sample. Figure 2b shows the logarithmic graviametric wear percentage after 1 h of testing. This graph shows why materials such as PTFE are commonly used as composites as the wear rate for PTFE is significantly higher than any other than the any other of the materials tested. As with the friction, it can be shown that the bonding has no significant effect on the wear. 
This can be surmised due to the fact that the pure PTFE and bonded PTFE as well as the PTFE PEEK composite and laminate both had no significant difference in their wear rates. Results and Discussion There is a wider spread of data in the PTFE-PEEK L cases, this error may have been caused by inconsistencies in the material preparation. Samples not being perfectly flat or layers being perfectly parallel may cause variation in the friction and wear. These errors are not of major concern as the purpose of the paper is to investigate if the process will worsen the performance which is not the case here. Figure 3a, PEEK showed very little damage to the surface. There were areas of fatigue and areas of scoring. The scoring was due to the scratching of the stiffer aluminium into the PEEK surface. The areas of fatigue are small in comparison to other samples and after one hour did not appear to produce any pitting within the surface. The composite surface as shown in Figure 3b looks remarkably similar to the bulk of PEEK again scoring and fatigue cracks are seen on the surface. There are areas in which the fatigue cracks have led to a small amount of pitting. Figure 3c shows an overall view of a laminated upper specimen it can be seen that the surface has been subject to a large number of imperfections. Only one of the bonding lines can be seen clearly as on the other bonding line the PTFE is smeared across the top of the surface. Around the bonding surface, there does not appear to be an increase in the intensity or frequency of the fatigue cracks the scene surface. The bonding line itself does not appear to be damaged and would suggest that the adhesive was suitable for the application and did not fail during testing. Again scoring can be seen on the surface parallel to the direction of reciprocation. It is difficult to separate damage produced during sample preparation and damage ascertained during testing, however, all of the samples were treated in the same manner and so it is possible to compare between samples. Figure 4a shows a bar chart of the intensity of the Fions detected by the EDS on the lower aluminium specimens. The scan was taken in the middle of the wear track where the velocity was highest, shown in Figure 4b. It can be seen the pure PEEK and the PTFE-PEEK-C demonstrated either low or no intensity of Fions. This is expected for the PEEK as there is no material present that contains F -. However comparing the PTFE-PEEK-C and PTFE-PEEK-L it can be seen that significantly more PTFE is transferred onto the aluminium from the laminate. This transfer layer has been shown by Dearn et al. to promote a low coefficient of friction and potentially a more stable contact [9]. However, in both a transfer of PTFE occurs. These results would suggest that the hypothesis that laminating makes no difference to the tribological properties of the sample compared to a composite is in part true. Whilst the friction and wear are not effected the mechanism by which the PTFE spreads across the surface appears to be different for the laminate and composite PEEK and PTFE samples. This is difficult to examine but in a composite, PTFE is spread evenly assuming that the composite is homogeneous and therefore the PTFE is supplied to the contact from an evenly distributed source. In the laminate, PTFE is at either end of the sample and so is very easy to identify where the PTFE has come from. 
As it has been shown that PTFE has a higher wear rate than the PEEK this would suggest that the edges of a laminated sample would wear quicker than the centre. One solution to this is to reduce the contact pressure incident on the PTFE by reducing the thickness of the laminates and increasing number of laminated layers. Conclusion This paper described a study, aiming to ascertain whether combining the polymers in a laminated material would produce similar tribological properties to composite polymers. A 20% PEEK PTFE composite under the conditions tested demonstrated a wear resistance consistent with PEEK and a coefficient of friction consistent with PTFE. It was shown that bonding PTFE together changes neither the friction or wear properties of the material. Most significantly it was also shown that a 20% PTFE PEEK laminate produced statistically similar results to the equivalent composite. A comparison of the surfaces demonstrated that smearing of the PTFE across the surface occurred in the laminate material. This is important as a PTFE layer will reduce the coefficient of friction. Similar levels of fatigue cracking and scoring occurred between the composite and the laminate. The PTFE encompassed within the laminate may have worn at a higher rate than the PEEK due to it being located on the leading edges of the samples. However, increasing the number of In conclusion, the presence of a stiff component and a lubricious component within a specimen is more significant than whether the PTFE is evenly distributed across the surface or not. This leads to the conclusion that is possible to use lamination as an alternative to composite materials.
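As an aside on reproducibility, the nonparametric comparison used in this study can be carried out along the lines sketched below. The friction values are synthetic stand-ins generated for illustration only, not the measured TE77 data, and the sample size is an assumption.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Synthetic stand-ins for repeated coefficient-of-friction measurements;
# these are NOT the measured TE77 values, only placeholders for illustration.
cof_ptfe     = rng.normal(0.10, 0.01, size=10)
cof_laminate = rng.normal(0.10, 0.01, size=10)

# Two-sided Mann-Whitney U test, assessed at the 95% confidence level.
stat, p = mannwhitneyu(cof_ptfe, cof_laminate, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}",
      "-> no significant difference" if p > 0.05 else "-> significant difference")
```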
Constraints on Holographic Dark Energy from Latest Supernovae, Galaxy Clustering, and Cosmic Microwave Background Anisotropy Observations The holographic dark energy model is proposed by Li as an attempt for probing the nature of dark energy within the framework of quantum gravity. The main characteristic of holographic dark energy is governed by a numerical parameter $c$ in the model. The parameter $c$ can only be determined by observations. Thus, in order to characterize the evolving feature of dark energy and to predict the fate of the universe, it is of extraordinary importance to constrain the parameter $c$ by using the currently available observational data. In this paper, we derive constraints on the holographic dark energy model from the latest observational data including the gold sample of 182 Type Ia supernovae (SNIa), the shift parameter of the cosmic microwave background (CMB) given by the three-year {\it Wilkinson Microwave Anisotropy Probe} ({\it WMAP}) observations, and the baryon acoustic oscillation (BAO) measurement from the Sloan Digital Sky Survey (SDSS). The joint analysis gives the fit results in 1-$\sigma$: $c=0.91^{+0.26}_{-0.18}$ and $\Omega_{\rm m0}=0.29\pm 0.03$. That is to say, though the possibility of $c<1$ is more favored, the possibility of $c>1$ can not be excluded in one-sigma error range, which is somewhat different from the result derived from previous investigations using earlier data. So, according to the new data, the evidence for the quintom feature in the holographic dark energy model is not as strong as before. I. INTRODUCTION Observations of Type Ia supernovae (SNIa) indicate that the universe is experiencing an accelerating expansion at the present stage [1,2]. This cosmic acceleration has also been confirmed by observations of large scale structure (LSS) [3,4] and measurements of the cosmic microwave background (CMB) anisotropy [5,6]. The cause for this cosmic acceleration is usually referred to as "dark energy", a mysterious exotic matter with large enough negative pressure, whose energy density has been a dominative power of the universe (for reviews see e.g. [7,8,9,10,11,12]). The astrophysical feature of dark energy is that it remains unclustered at all scales where gravitational clustering of baryons and nonbaryonic cold dark matter can be seen. Its gravity effect is shown as a repulsive force so as to make the expansion of the universe accelerate when its energy density becomes dominative power of the universe. The combined analysis of cosmological observations suggests that the universe is spatially flat, and consists of about 70% dark energy, 30% dust matter (cold dark matter plus baryons), and negligible radiation. Although we can affirm that the ultimate fate of the universe is determined by the feature of dark energy, the nature of dark energy as well as its cosmological origin remain enigmatic at present. However, we still can propose some candidates to interpret or describe the properties of dark energy. The most obvious theoretical candidate of dark energy is the cosmological constant λ [13] which always suffers from the "fine-tuning" and "cosmic coincidence" puzzles. Theorists have made lots of efforts to try to resolve the cosmological constant problem, but all these efforts were turned out to be unsuccessful. 
Numerous other candidates for dark energy have also been proposed in the literature, such as an evolving canonical scalar field [14,15,16,17,18,19] usually referred to as quintessence, the phantom energy [20,21,22] with an equation-of-state smaller than −1 violating the weak energy condition, the quintom energy [23,24,25,26,27,28] with an equation-of-state evolving across −1, the hessence model [29,30,31], the Chaplygin gas model [32,33,34], and so forth. Actually, the dark energy problem may be in principle a problem belongs to quantum gravity domain [35]. Another promising model for dark energy, the holographic dark energy model, was proposed by Li [36] from some considerations of fundamental principle in the quantum gravity. It is well known that the holographic principle is an important result of the recent researches for exploring the quantum gravity or string theory [37,38]. This principle is enlightened by investigations of the quantum property of black holes. Roughly speaking, in a quantum gravity system, the conventional local quantum field theory will break down. The reason is rather simple: For a quantum gravity system, the conventional local quantum field theory contains too many degrees of freedom, and such many degrees of freedom will lead to the formation of black hole so as to break down the effectiveness of the quantum field theory. For an effective field theory in a box of size L, with UV cut-off Λ the entropy S scales extensively, S ∼ L 3 Λ 3 . However, the peculiar thermodynamics of black hole [39,40,41,42,43,44] has led Bekenstein to postulate that the maximum entropy in a box of volume L 3 behaves nonextensively, growing only as the area of the box, i.e. there is a so-called Bekenstein entropy bound, S ≤ S BH ≡ πM 2 Pl L 2 . This nonextensive scaling suggests that quantum field theory breaks down in large volume. To reconcile this breakdown with the success of local quantum field theory in describing observed particle phenomenology, Cohen et al. [45] proposed a more restrictive bound -the energy bound. They pointed out that in quantum field theory a short distance (UV) cut-off is related to a long distance (IR) cutoff due to the limit set by forming a black hole. In other words, if the quantum zero-point energy density ρ vac is relevant to a UV cut-off, the total energy of the whole system with size L should not exceed the mass of a black hole of the same size, thus we have L 3 ρ vac ≤ LM 2 Pl . This means that the maximum entropy is in order of S 3/4 BH . When we take the whole universe into account, the vacuum energy related to this holographic principle [37,38] is viewed as dark energy, usually dubbed holographic dark energy (its density is denoted as ρ de hereafter). The largest IR cut-off L is chosen by saturating the inequality so that we get the holographic dark energy density where c is a numerical constant, and M Pl ≡ 1/ √ 8πG is the reduced Planck mass. If we take L as the size of the current universe, for instance the Hubble radius H −1 , then the dark energy density will be close to the observational result. However, Hsu [46] pointed out that this yields a wrong equation of state for dark energy. Li [36] subsequently proposed that the IR cut-off L should be taken as the size of the future event horizon Then the problem can be solved nicely and the holographic dark energy model can thus be constructed successfully. The holographic dark energy scenario may provide simultaneously natural solutions to both dark energy problems as demonstrated in [36]. 
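As a rough numerical illustration of the statement that taking L of the order of the present horizon gives a density close to the observed value, the snippet below evaluates ρ_de = 3c²M²_Pl L⁻² with L = H₀⁻¹ in natural units. The constants are approximate round numbers used only for an order-of-magnitude check.

```python
# Order-of-magnitude check: rho_de = 3 c^2 M_Pl^2 H0^2 when L ~ 1/H0.
# Natural units (GeV); constants are approximate round numbers.
M_PL = 2.4e18   # reduced Planck mass, GeV
H0   = 1.5e-42  # Hubble constant for h ~ 0.7, GeV
c    = 1.0

rho_de   = 3 * c**2 * M_PL**2 * H0**2   # ~ 4e-47 GeV^4
rho_crit = 3 * M_PL**2 * H0**2          # critical density today
print(f"rho_de ~ {rho_de:.1e} GeV^4 "
      f"(observed dark energy ~ 0.7 * rho_crit = {0.7 * rho_crit:.1e} GeV^4)")
```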
For extensive studies on the holographic dark energy model see e.g. [47,48,49,50,51,52,53,54,55,56,57,58,59]. The holographic dark energy model has been tested and constrained by various astronomical observations, such as SNIa [60], CMB [61,62,63], combination of SNIa, CMB and LSS [64], the X-ray gas mass fraction of galaxy clusters [65], and the differential ages of passively evolving galaxies [66]. Recently, the three-year data of Wilkinson Microwave Anisotropy Probe (WMAP) observations [67] were announced. Moreover, Riess et al. [68] lately released the up-to-date 182 "gold" data of SNIa from various sources analyzed in a consistent and robust mannor with reduced calibration errors arising from systematics. This paper aims at placing new observational constraints on the holographic dark energy model by using the gold sample of 182 SNIa compiled by Riess et al. [68], the CMB shift parameter derived from three-year WMAP observations [69], and the baryon acoustic oscillations detected in the large-scale correlation function of Sloan Digital Sky Survey (SDSS) luminous red galaxies [70]. This paper is organized as follows: In section II we discuss the basic characteristics of the holographic dark energy model. In section III, we perform constraints on the holographic dark energy model by using the up-to-date observational datasets. Finally, we give the concluding remarks in section IV. II. THE MODEL OF HOLOGRAPHIC DARK ENERGY In this section, we shall review the holographic dark energy model briefly and discuss the basic characteristics of this model. Now let us consider a spatially flat Friedmann-Robertson-Walker (FRW) universe with matter component ρ m (including both baryon matter and cold dark matter) and holographic dark energy component ρ de , the Friedmann equation reads or equivalently, where z = (1/a) − 1 is the redshift of the universe. Note that we always assume spatial flatness throughout this paper as motivated by inflation. Combining the definition of the holographic dark energy (1) and the definition of the future event horizon (2), we derive We notice that the Friedmann equation (4) implies Substituting (6) into (5), one obtains the following equation where x = ln a. Then taking derivative with respect to x in both sides of the above relation, we get easily the dynamics satisfied by the dark energy, i.e. the differential equation about the fractional density of dark energy, where the prime denotes the derivative with respect to the redshift z. This equation describes behavior of the holographic dark energy completely, and it can be solved exactly [36]. From the energy conservation equation of the dark energy, the equation of state of the dark energy can be given [36] Note that the formula ρ de = Ω de 1−Ω de ρ m0 a −3 and the differential equation of Ω de (8) are used in the second equal sign. It can be seen clearly that the equation of state of the holographic dark energy evolves dynamically and satisfies −(1 Hence, we see clearly that when taking the holographic principle into account the vacuum energy becomes dynamically evolving dark energy. The parameter c plays a significant role in this model. If one takes c = 1, the behavior of the holographic dark energy will be more and more like a cosmological constant with the expansion of the universe, such that ultimately the universe will enter the de Sitter phase in the far future. As is shown in [36], if one puts the parameter Ω de0 = 0.73 into (9), then a definite prediction of this model, w 0 = −0.903, will be given. 
On the other hand, if c < 1, the holographic dark energy will exhibit appealing behavior that the equation of state crosses the "cosmological-constant boundary" (or "phantom divide") w = −1 during the evolution. This kind of dark energy is referred to as "quintom" [23] which is slightly favored by current observations, see e.g. [71,72,73,74,75,76,77,78]. If c > 1, the equation of state of dark energy will be always larger than −1 such that the universe avoids entering the de Sitter phase and the Big Rip phase. Hence, we see explicitly, the value of c is very important for the holographic dark energy model, which determines the feature of the holographic dark energy as well as the ultimate fate of the universe. For an illustrative example, see Figure 1 in [64], in which the selected evolutions in different c for the equation of state of holographic dark energy are plotted. It is clear to see that the cases in c ≥ 1 always evolve in the region of w ≥ −1, whereas the case of c < 1 behaves as a quintom whose equation of state w crosses the cosmological constant boundary w = −1 during the evolution. It has been shown in previous analyses of observational data [64,65,66] that the holographic dark energy exhibits quintom-like behavior basically within statistical error one sigma. Recently, the up-to-date gold sample of SNIa consists of 182 data was compiled by Riess et al. [68]. It contains 119 points from the previous sample compiled in [79] [68] due to highly uncertain color measurements, high extinction A V > 0.5 and a redshift cut z < 0.0233, to avoid the influence of a possible local "Hubble Bubble", so as to where M is the absolute magnitude which is believed to be constant for all Type Ia supernovae, and the luminosity distance-redshift relation is where L is the absolute luminosity which is a known value for the standard candle SNIa, (4). Note that the dynamical behavior of Ω de is governed by differential equation (8). In order to place constraints on the holographic dark energy model, we perform χ 2 statistics for the model parameters (c, Ω m0 ) and the present Hubble parameter H 0 . For the SNIa analysis, we have −0. 21 and Ω m0 = 0.43 +0.08 −0.14 . We see that the parameter c in 1 σ range, 0.16 < c < 0.93, is smaller than 1, making the holographic dark energy behave as quintom with equation-of-state evolving across w = −1, according to this analysis. On the other hand, the above analysis shows that the SNIa data alone seem not sufficient to constrain the holographic dark energy model strictly. The confidence region of c − Ω m0 plane is rather large, especially for the parameter c. Moreover, it is remarkable that the best fit value of Ω m0 of this model is evidently larger than that of the ΛCDM model. For comparison, we refer to the WMAP result for Ω m0 in ΛCDM model: Ω m0 = 0.24 +0.03 −0.04 [67]. As has been elucidated in [64] that for the holographic dark energy the fit of SNIa data is very sensitive to the Hubble parameter H 0 , so it is very important to find other observational quantities irrelevant to H 0 as a complement to SNIa data. Fortunately, such suitable data can be found in the probes of CMB and LSS. For the CMB data, we use the CMB shift parameter. The CMB shift parameter R is perhaps the least model-independent parameter that can be extracted from CMB data. The shift parameter R is given by [86] R ≡ Ω 1/2 m0 where z CMB = 1089 is the redshift of recombination. 
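For a sense of how these quantities are evaluated in practice, the sketch below integrates the standard form of the Ω_de evolution equation (Eq. 8) for a flat universe, evaluates the equation of state (Eq. 9), and computes the shift parameter R by integrating 1/E(z) out to z_CMB = 1089. The parameter values are illustrative (chosen so that Ω_de0 = 0.73 reproduces the w₀ ≈ −0.903 example quoted above for c = 1) and are not the best-fit values reported later in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Sketch: evolve Omega_de(z) (Eq. 8), evaluate w(z) (Eq. 9), and compute the
# CMB shift parameter R for a flat universe. Parameters are illustrative.
c_hol, Om0 = 1.0, 0.27
Ode0, z_cmb = 1.0 - Om0, 1089.0

def dOde_dz(z, Ode):
    # Omega_de' = -Omega_de (1 - Omega_de) (1 + 2 sqrt(Omega_de)/c) / (1 + z)
    return -Ode * (1.0 - Ode) * (1.0 + 2.0 * np.sqrt(Ode) / c_hol) / (1.0 + z)

sol = solve_ivp(dOde_dz, (0.0, z_cmb), [Ode0],
                dense_output=True, rtol=1e-8, atol=1e-12)
Ode = lambda z: sol.sol(z)[0]

E = lambda z: np.sqrt(Om0 * (1.0 + z) ** 3 / (1.0 - Ode(z)))  # H(z)/H0, flat
w = lambda z: -1.0 / 3.0 - 2.0 * np.sqrt(Ode(z)) / (3.0 * c_hol)

R, _ = quad(lambda z: 1.0 / E(z), 0.0, z_cmb)
R *= np.sqrt(Om0)
print(f"w(z=0) = {w(0.0):.3f}   shift parameter R = {R:.3f}")
```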
The value of the shift parameter R can be determined by three-year integrated WMAP analysis [67], and has been updated by [69] to be 1.70 ± 0.03 independent of the dark energy model. For the LSS data, we use the measurement of the BAO peak in the distribution of SDSS luminous red galaxies (LRGs). The SDSS BAO measurement [70] gives A = 0.469(n s /0.98) −0.35 ± 0.017 (independent of a dark energy model) at z BAO = 0.35, where A is defined as Here the scalar spectral index is taken to be n s = 0.95 as measured by the three-year WMAP data [67]. We notice that both R and A are independent of H 0 ; thus these quantities can provide robust constraint as complement to SNIa data on the holographic dark energy model. We now perform a combined analysis of SNIa, CMB, and LSS on the constraints of the holographic dark energy model. We use the χ 2 statistics where χ 2 SN is given by equation (12) Table I. We also show the best-fit case of SNIa+CMB+LSS analysis on the residual Hubble diagram with respect to the best-fit case of SNIa alone analysis in Figure 1. We see clearly that in the joint analysis the derived value for matter density Ω m0 is very reasonable. In addition, it should be emphasized that what is of importance for this model is the determination of the value of c. In Figure 3 we plot the 1-dimensional likelihood function for c, marginalizing over the other parameters. We notice that the best-fit value of c in this analysis is enhanced to around 0.91. Intriguingly, the range of c in 1-σ error, 0.73 < c < 1.17, is not capable of ruling out the probability of c > 1; this conclusion is somewhat different from those derived from previous investigations using earlier data. In previous work, for instance, [64] and [65], the 1-σ range of c obtained can basically exclude the probability of c > 1 giving rise to the quintessence-like behavior, supporting the quintomlike behavior evidently. Though the present result (in 1-σ error range) from the analysis of the up-to-date observational data does not support the quintom-like feature as strongly as before, the best-fit value (c = 0.91) still exhibits the holographic quintom characteristic. Another problem of concern is that both the SNIa alone analysis and the SNIa + CMB + LSS joint analysis predict a low value of dimensionless Hubble constant h. For the Hubble constant, one of the most reliable results comes from the Hubble Space Telescope Key Project [87]. This group has used the empirical period-luminosity relations for Cepheid variable stars to obtain distances to 31 galaxies, and calibrated a number of secondary distance indicators measured over distances of 400 to 600 Mpc. The result they obtained is h = 0.72 ± 0.08. It is remarkable that, intriguingly, this result is in such good agreement with the result derived and Ω m0 = 0.29±0.03, which are almost the same as the results from without HST data. So, we next take another way of incorporating the HST prior, 0.64 < h < 0.80, into account, in the data analysis. When considering this prior, the confidence level contours get shrinkage and left-shift in the c − Ω m0 parameter-plane, as shown in Figure 5. In this case the fit values for model parameters with one-sigma errors are c = 0.82 +0.11 −0. 13 and Ω m0 = 0.28 +0.03 −0.02 . We see that the holographic dark energy features quintom dark energy within one-sigma range in this case. Furthermore, we also consider a strong HST prior, fixing h = 0.72, in order to see how strongly biased constraints can be derived from a factitious prior on h. 
We plot the results of this case in Figure 6. The fit values for the model parameters with one-sigma errors are c = 0.42 ± 0.05 and Ω_m0 = 0.24 (+0.02, −0.03). We find that the shrinkage and left-shift of the confidence contours become even more pronounced in this case, and the quintom feature, with its characteristic w = −1 crossing, can be easily seen for the holographic dark energy model.

IV. CONCLUDING REMARKS

The cosmic acceleration observed through distance-redshift measurements of SNIa strongly supports the existence of dark energy. The remarkable physical property of dark energy not only drives the current cosmic acceleration but also determines the ultimate fate of the universe. However, the nature of dark energy, as well as its cosmological origin, still remains enigmatic. Though the underlying theory of dark energy is still far beyond our present knowledge, it is expected that quantum gravity will play a significant role in resolving the dark energy enigma. The holographic dark energy model is proposed as an attempt to probe the nature of dark energy within the framework of quantum gravity: it is based upon a fundamental principle of quantum gravity, the holographic principle, and so possesses some significant features of an underlying theory of dark energy. The main characteristic of holographic dark energy is governed by a numerical parameter c in the model. This parameter, c, can only be determined by observations. Hence, in order to characterize the evolving features of dark energy and to predict the fate of the universe, it is of extraordinary importance to constrain the parameter c by using the currently available observational data. In this paper, we have analyzed the holographic dark energy model by using the up-to-date gold SNIa sample, combined with the CMB and LSS data. Since the SNIa data are sensitive to the Hubble constant H_0, while the CMB shift parameter and the BAO parameter are independent of the Hubble parameter, the combination of these datasets leads to strong constraints on the model parameters, as shown in Figure 2. The joint analysis indicates that, though the possibility of c < 1 is more favored, the possibility of c > 1 cannot be excluded at the one-sigma level, which is somewhat different from the result derived from previous investigations using earlier data (such as [64], in which the result c < 1 is basically favored within the 1σ range). That is to say, according to the new data, the evidence for the quintom feature in the holographic dark energy model is not as strong as before. However, when considering the HST prior, 0.64 < h < 0.80, the quintom-like behavior can be supported within the one-sigma error range, as shown in Figure 5. On the whole, the current observational data cannot yet constrain the parameters of the holographic dark energy model at high precision. We expect that future high-precision observations, such as the SuperNova/Acceleration Probe (SNAP), will be capable of determining the value of c precisely and thus revealing the nature of the holographic dark energy.
2014-10-01T00:00:00.000Z
2007-01-14T00:00:00.000
{ "year": 2007, "sha1": "0c3f7c658b8daa2f1c528d3849518b43cce9b6a4", "oa_license": null, "oa_url": "http://arxiv.org/pdf/astro-ph/0506310v2.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0c3f7c658b8daa2f1c528d3849518b43cce9b6a4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
252842543
pes2o/s2orc
v3-fos-license
Motion Sensors for Knee Angle Recognition in Muscle Rehabilitation Solutions The progressive loss of functional capacity due to aging is a serious problem that can compromise human locomotion capacity, requiring the help of an assistant and reducing independence. The NanoStim project aims to develop a system capable of performing treatment with electrostimulation at the patient’s home, reducing the number of consultations. The knee angle is one of the essential attributes in this context, helping understand the patient’s movement during the treatment session. This article presents a wearable system that recognizes the knee angle through IMU sensors. The hardware chosen for the wearables are low cost, including an ESP32 microcontroller and an MPU-6050 sensor. However, this hardware impairs signal accuracy in the multitasking environment expected in rehabilitation treatment. Three optimization filters with algorithmic complexity O(1) were tested to improve the signal’s noise. The complementary filter obtained the best result, presenting an average error of 0.6 degrees and an improvement of 77% in MSE. Furthermore, an interface in the mobile app was developed to respond immediately to the recognized movement. The systems were tested with volunteers in a real environment and could successfully measure the movement performed. In the future, it is planned to use the recognized angle with the electromyography sensor. Introduction Over human life, our body goes through several muscular and hormonal changes. Generally, a healthy person reaches the peak of their strength and muscle mass between 25 and 34 years old. Afterwards, the human body slows down the metabolism, hormonal cycles, and muscle recovery. The intensification of these symptoms is usually reported at 50 years old and can lead to disorders such as sarcopenia or degenerative diseases such as Knee Osteoarthritis (KOA) [1]. The World Health Organization (WHO) reports [2] that the recovery and maintenance of functional capacity are one of the main concerns for healthy aging, especially since the worsening of symptoms caused by pathologies related to aging, such as KOA, can compromise human locomotion capacity, requiring the help of an assistant and reducing independence [3]. In addition, the world's elderly population is growing fast, with 1 billion elderly people living now, which is 2.5 times greater than in 1980. In the search for new treatments that can better fit the needs of elderly patients who suffer from muscular disabilities, the NanoStim project emerges. The NanoStim project aims to reduce the burden on healthcare services by developing a solution that allows electrostimulation treatment to be performed at the patient's home. Nowadays, the treatment with electrostimulation for muscle strengthening is divided into sessions performed in a physiotherapy clinic, requiring two or three visits per week for a session that lasts 40 min to complete. For this treatment at home to be feasible, the architecture of an electronic computercontrolled system was designed, including a wearable component capable of applying an electrostimulation protocol defined by the physician [4]. The wearable technology was chosen due to its unique advantages of instantaneity, flexibility, and the ability to transport sensors easily [5]. 
Thus, in addition to allowing the treatment to be performed at home, it is also possible to track biophysical and biomechanical signals during a treatment session and use the data acquired to adjust the stimulation protocol considering the particularities of each patient. In order to understand which sensors can bring relevant information to the proposed treatment, a literature review [6] was conducted looking for studies that used biomechanical data to classify the stages of KOA and raise characteristics that contribute to the pathology interpretation. As a result, two sensors were highlighted as the most significant in identifying distinctions between the motion behavior of patients, the Electromyographic (EMG) sensor and the Inertial Measurement Unit (IMU) sensor. EMG sensors can monitor the electrical activity of a muscle during a given movement or activity through electrodes placed on the surface of the skin. This information can be essential for muscle rehabilitation treatment, providing metrics to adjust stimulation parameters relative to muscle effort during a treatment session. In addition, the use of electromyography can be considered common in physiotherapy clinics, reducing the learning curve of professionals in interpreting the proposed treatment. As part of the NanoStim project, the implementation of a wearable system capable of acquiring EMG signals and performing electrostimulation simultaneously can be found in the article [7]. The knee angle is the most significant parameter in the KOA classification, mainly due to the difference in the behavior of the lower limbs in everyday activities, such as walking. The studies conducted reported that the behavior of patients tends to present a similar pattern depending on the stage of the pathology. In addition to the use of cameras, the most used method to acquire the knee angle was through two IMU sensors. The first sensor is positioned on the patient's thigh, and the second is on the shin; thus, the angle of each sensor is correlated to calculate the knee angle. Likewise, the knee angle also becomes relevant in our context, expanding our ability to classify the progress of a treatment based on the difference in behavior recorded over the sessions. In addition, with the streaming data, it is possible to verify if the movement performed by the patient during the treatment is correct and to act if it is not. myHealth is a mobile app for Android designed to offer a technological interface between the patient and the physician in an electrostimulation treatment at home. This application under development was able to apply a stimulation protocol and collect data from the EMG sensor simultaneously using the aforementioned wearable system. More details about the myHealth app and the communication with the wearable system during a treatment session can be seen in article [8]. In order to understand the movement performed during a treatment session, recognizing the knee angle, we propose in this article two approaches that are implemented in the myHealth app. The first approach implements two wearable modules to perform the acquisition of the IMU sensor and transmit the data via Bluetooth Low Energy (BLE). The second approach implements knee angle recognition with streaming data in a mobile application. 
This present article proposes the following contributions: a The development of a low-cost wearable system capable of acquiring data from an IMU sensor; b Identification of the information needed to calculate the knee angle and the construction of an interface to recognize knee movement in a mobile app; c Characterization and comparison of low-cost computational filters to improve the accuracy of motion sensors. This document contains six more sections. Section 2 discusses related works, presenting similar applications found in the literature; Section 3 describes the proposed solution, including the system architecture, the electrical circuit, and the communication protocol. Section 4 explores the mathematical model used to calculate the angles from the IMU sensors. Section 5 presents the algorithms and tests performed to correct sensor reading errors. Section 6 describes the steps followed to implement knee angle recognition in the mobile application called myHealth. Section 7 reports the main conclusions and future work. Literature Review Currently, it is possible to find comprehensive literature using motion sensors during rehabilitation sessions. In a literature review [9] on the topic focused on technological and clinical advances, evidence is reported that indicates a potential benefit for pathologies such as stroke, movement disorders, knee osteoarthritis, and running injuries. Similar to the objective of the NanoStim project, Sultan [10] carried out a study applying an electrostimulation treatment in patients with KOA through a wearable device and a mobile application. The wearable used was capable of collecting Range Of Motion (ROM) values through two accelerometers. However, the acquired data were not used in the treatment, only stored by the app. After the end of the treatment session, the data were sent to the cloud and made available for the physician to monitor the progress. Although the patient was required to adjust stimulation intensity and treatment duration, the treatment showed an improvement in ROM values and a significant reduction in pain scores. Gait analysis is another way of measuring body movements, body mechanics, and the activity of the muscles. In [11], Milic employed an evaluation of the gait parameters to understand metabolic and mechanical variables. For this study, a tool called Optogait was used. This tool can display all of the collected data in real-time through a software platform and is paired with lateral and sagittal video analysis. In this study, the Optogait system was positioned on a treadmill where the candidate walked, and it was possible to observe the parameters of the gait cycle in real-time. The system also provides feedback regarding movement asymmetries and what can be employed for clinical intervention. The authors demonstrated that the proposed Iso-Efficiency Speeds (IES) method offers the highest performance benefits while lowering or at least not increasing the metabolic cost. Although this study employs a deterministic analysis due to equations for uphill walking gait, the results are concise and in accordance with the literature's desired outputs. Machine Learning (ML) techniques are also being explored to improve the diagnosis using data from IMU sensors. Mezghani [12] creates a dataset from a commercial device capable of recording the knee angle in search of mechanical biomarkers. In this study, it was possible to classify the KOA stages using the Kellgren and Lawrence scale with 85% accuracy. 
Similarly, Kobsar [13] also uses IMU sensors to classify KOA using ML. This study proposes the creation of a wearable device and tests different positions for gait identification. Although the study did not specifically use the knee angle, it was able to reach 81.7% in the classification. Despite this, in the systematic review [14] on the accuracy of clinical applications using wearable motion sensors, it is reported that it is difficult to estimate the reliability of the studies. This is because many of the studies explored use different and sometimes inadequate methods, making the task of correlating the real advances achieved by this technology inaccurate. In addition, the studies found generally do not offer a detailed explanation of how the wearable system works, especially the integration with sensors. In this line, the study reported by Almeida [15] contributed to our work reported in this article. The authors developed a wearable acquisition system capable of collecting data from the IMU sensor and transporting the data collected via Wi-Fi to a client system developed in Python. The study develops a proof of concept by comparing three correction filters (Complementary, Kalman, and Madgwick) in three different scenarios. The authors made the code and libraries used for the tests performed available on GitHub to make it possible to replicate the wearable system. The study points out Madgwick as the filter with a lower error percentage, followed by the Kalman and Complementary filters. Despite this, the authors processed the data in the cloud and did not consider the computational cost of the filtering algorithms and the amount of data required for transmission. This can be a problem in embedded systems, as processing power is generally very low, and transmission technologies such as Bluetooth Low Energy allow for the exchange of small data packets. Wearable Acquisition System The IMU is a type of wearable technology that can be employed to measure motion biomechanics [16]. The IMU sensor is usually composed of an accelerometer and a gyroscope, both with three axes (x,y,z); it is also possible to find models that include a magnetometer. The metrics collected from accelerometer sensors, such as the magnitude of an acceleration, loading rate, and shock attenuation, are similar to metrics obtained using force plates [17]. When the gyroscope and/or magnetometer sensors in an IMU are used, the acquired results provide information on the kinematics, including segment and joint rotations [18]. One IMU model commonly employed in wearable applications is the MPU-6050. This device offers low power consumption, low cost, and high-performance requirements for smartphones, tablets, and wearable sensors [19]. With its ability to precisely and accurately track user motions, the MPU-6050 allows MotionTracking technology to convert handsets and tablets into powerful 3D intelligent devices that can be employed in health monitoring applications [20]. Furthermore, this IMU device has a three-axis gyroscope, three-axis accelerometer, a Digital Motion Processor TM (DMP), and a dedicated I 2 C sensor bus, all in a small package (4 × 4 × 0.9) mm [19]. Regarding the sensor precision, the MPU-6050 features three 16-bit Analog-to-Digital Converters (ADCs) for digitizing the gyroscope outputs and three 16-bit ADCs for digitizing the accelerometer outputs, which allows it to track both fast and slow motions. 
In addition, the parts feature a user-programmable gyroscope full-scale range of ±250, ±500, ±1000, and ±2000 °/s and a user-programmable accelerometer full-scale range of ±2 g, ±4 g, ±8 g, and ±16 g [20]. To take advantage of the functionalities available in the MPU-6050, an architecture based on the ESP32 Microcontroller Unit (MCU) was designed, as illustrated in Figure 1. The proposed module comprises a battery, a DC-DC voltage regulator, the MCU, and the IMU sensor. The employed source was a conventional 3.7 V 700 mAh lithium-ion battery. The voltage regulator used was the S09 model due to its low cost and output voltage level, which allows an input voltage of (3-15) V, output voltages of 3.3 V/4.2 V/5.0 V/9.0 V/12.0 V, and a maximum output current of 0.6 A. The ESP32 was selected as the MCU due to its low cost and the availability of wireless communication protocols (Wi-Fi and BLE); it provides the I²C bus used to collect the acquired data from the MPU-6050 and sends the collected data through BLE to a mobile application. From the defined scheme, the electronic components were soldered on a universal perforated board and fixed with hot glue in a case produced on a 3D printer. The case illustrated in Figure 2 was designed to protect the electronic components and provide a way to tie the wearable system to a surface, such as a person's thigh. Thus, to keep the case stable in the desired location, two gaps were made on each side of the case, making it possible to pass clothing elastics through and tie a knot. The ESP32 is an MCU with enough computational power to acquire sensor data at high frequencies, since the clock of a standard model, such as the ESP32-WROOM-32D, is above 150 MHz. However, the wearable system under development will not only collect data from the IMU sensor, but the software will also be responsible for performing the following tasks simultaneously: (1) collect data from the EMG sensor, (2) receive the stimulation protocol, (3) apply the stimulation, and (4) transport all collected data via BLE to the mobile app. Considering the scenario described, the wearable systems capable of performing the electrostimulation sessions [7] were refactored with the addition of IMU data acquisition. To simulate the designed treatment activities, the refactored software was installed in one of the wearable modules. For the second module, new software was implemented and programmed to only acquire the data from the IMU and transmit it via BLE. With this, it is possible to carry out the tests under the real scenario expected for the software and the hardware, exposing the limitations of the available computational resources. Given this, the data acquisition of the IMU sensor was implemented in a thread programmed to collect a sample every 8 ms, ideally resulting in a sampling frequency of 125 Hz. However, due to the sharing of processing power with the other tasks, the data collected showed acquisitions of 99-100 Hz. Furthermore, the average time interval between each collection was 9.8 ms, with some peaks above 24 ms. The chart in Figure 3 displays the time difference between each sample collected by the wearable system in an example of an acquisition performed. BLE was chosen for the data transmission because it presents lower power consumption than standard Bluetooth and Wi-Fi, significantly increasing battery life. However, in addition to being expensive in terms of processing power, version 4.2 of BLE limits the amount of data per packet to 517 bytes.
As a result, to send all the raw data acquired by the IMU sensor, it is necessary to transmit at least 200 data packets per second to keep the mobile app synchronized. This is because the raw data includes six variables of 16 bytes for the accelerometer and gyroscope and one more variable of 4 bytes for the time interval, resulting in 100 bytes per sample. Optimally for the proposed solution, it would only be necessary to send the sensor angle in relation to the sagittal plane of the human body. Each processed angle sample requires 4 bytes: 1 byte for the sign, 2 bytes for the integer value, and 1 byte for the mantissa. Thus, it is possible to send up to 125 samples per packet, corresponding to more than one second of data acquisition. The implemented strategy consisted of sending an IMU data packet every 400 ms, essentially aiming to keep the mobile app better synchronized. Each data packet has a vector with three variables of 4 bytes each: the calculated roll angle, the value of the X-axis of the gyroscope, and the time interval. Thus, every 400 ms, an average of 40 data samples are generated, resulting in 480 bytes. In this way, it is possible to transmit all the data collected up to that moment, and the maximum would be 43 samples (516 bytes) per packet. To start and finish data collection, the mobile application that can receive the EMG signal during a treatment session [8] was also refactored to receive the IMU data. Thus, during the session, the application processes and stores the biofeedback data in an internal file. When the session ends, the app sends the file to the cloud via an HTTP API, according to the system architecture [4].

Mathematical Model

The mathematical model used in this work is based on the theory of multibody system dynamics, as presented by Olinski et al. [21]. In this approach, the orientation of a body in space is given by the orientation of a local frame attached to the body with respect to a reference coordinate system (Figure 4). Considering the particular case in which the orientation changes occur in a specific plane, as presented in Figure 5a, the mapping of frames with respect to a reference frame can be represented by a single rotation from one reference to another (Figure 5b). To map one frame with respect to another in the case of rotations in the YZ-plane by an angle Φ_j around the x-axis, as presented in Figure 5, the Euler angles are determined by successively applying the linear mapping given by the rotation matrices (R_Φ1 and R_Φ2) in each of these spaces. The rotation matrix is given by Equation (1), where j = 1 or j = 2: R_Φj = [[1, 0, 0], [0, cos Φ_j, −sin Φ_j], [0, sin Φ_j, cos Φ_j]]. Considering the sagittal plane as the plane of reference for the movements, once the abduction/adduction angles are neglected (Figure 6), the orientation of the leg's frame and the thigh's frame, both with respect to the inertial coordinate system, are represented as rotations in the referred plane. Since the monitored movement happens in the sagittal plane, the knee angle is given by the difference between the thigh and leg angles, Φ_knee = Φ_2 − Φ_1. Because the IMU's data are measured with respect to the inertial coordinate system, attaching an IMU to each body naturally permits the determination of their orientation with respect to the inertial system, and the difference between them gives the knee angle.
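As a concrete illustration of the sagittal-plane model above, the short sketch below converts an accelerometer reading to a roll angle and takes the difference of the two module angles. The atan2-based conversion and the assignment of Φ_1 to the thigh and Φ_2 to the shin are assumptions made for illustration; the exact equation referred to as Section 4 in the text is not reproduced in this excerpt.

import math

def roll_from_accel(ay, az):
    # Roll angle (rotation about the x-axis, in degrees) estimated from the
    # gravity components measured by the accelerometer; assumes a quasi-static sensor.
    return math.degrees(math.atan2(ay, az))

def knee_angle(thigh_roll_deg, shin_roll_deg):
    # Knee angle in the sagittal plane, Phi_knee = Phi_2 - Phi_1; mapping the
    # thigh to Phi_1 and the shin to Phi_2 is an assumption for this example.
    return shin_roll_deg - thigh_roll_deg

# Example: thigh module at 10 degrees, shin module at 75 degrees -> 65 degrees of flexion.
print(knee_angle(10.0, 75.0))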
Acquisition Optimization

As described in Section 3, IMU sensors such as the MPU-6050 model can capture a given movement with reasonable accuracy and sensitivity. However, the performance of these sensors in the real environment shows inconsistency and electrical noise in the acquired samples. Thus, to improve the accuracy of the collected data, two preliminary activities will be described: the calibration of the sensors to remove the offset values and the optimization using a low-cost computational filter that will be implemented in the wearable system.

Calibration

Although the IMU sensors are already calibrated by their manufacturers, over time, it is possible to record measurements that are completely different from zero when the sensor is static, as reported by Woodman et al. [22]. To align the measurements to zero, the following steps were performed: (1) position the wearable modules on a straight surface; (2) turn on the modules and wait for 1 min; (3) send the calibration command from the mobile app; (4) the wearable modules start acquiring IMU data at the maximum executable frequency for 10 s; (5) the average of the acquired values is calculated for each axis of the accelerometer and gyroscope; (6) the resulting values are saved in the internal memory of the wearable system. When the wearable system is powered on again, the stored values, known as offset values in our system, will be accessed. Thus, for each sample acquired, the offset value of the respective axis will be subtracted, resulting in an approximate measurement of zero when the module is static. In step two, the modules remain on for one minute before calibration so that they stabilize and so that temperature effects in the MPU-6050 do not interfere with the readings.

Filters with Algorithmic Complexity O(1)

Within the signal processing area, several filters have been studied to improve the accuracy of IMU sensors. As commented in the literature review, implementing filters such as Kalman and Madgwick can considerably reduce errors in measuring joint angles. However, as described by Valade [23], implementing filters such as the Kalman filter in embedded systems carries a very high computational cost. The Big O notation is one of the most used notations to describe the computational cost of a given algorithm. This notation takes into account the size of the input and counts the number of instructions used to execute a given sequence of code. For example, for an algorithm that calculates whether the given input is even or odd, only one instruction will be used, resulting in an algorithmic complexity of O(1). For an algorithm that needs to traverse a vector of size n, the algorithmic complexity is O(n), since at least n instructions will be executed to complete the task [24]. The algorithmic complexity presented by Valade [23] for the Kalman filter is O(10n³); for the extended Kalman filter, it is O(4n³). A study on the algorithmic complexity of the Madgwick filter was not found, but since matrix calculations are used, the algorithmic complexity is expected to be at least O(n). Furthermore, these algorithms require memory resources to store the intermediate matrices needed in every calculation. Therefore, three filters with the lowest computational cost, with algorithmic complexity of O(1), will be tested: the Simple Moving Average (SMA), the Exponential Moving Average (EMA), and the Complementary Filter of the accelerometer and gyroscope (CF). The algorithm that presents the best results will be implemented in the wearable's embedded system to run at the time of collection and transmit only the values of the angles to the mobile app.
Moving average filters can smooth out the oscillations present in a signal. In this article, the signal is represented by a sequence of numbers ordered in time, also known as a time-series array. To calculate the SMA, it is first necessary to define the only required parameter, the window size (w) to be moved along the vector. Thus, for each sample acquired, the simple average is calculated over the last w elements of the array, with the new sample included. It is possible to apply the SMA filter by traversing the vector within the defined window to find the new average, resulting in an algorithmic complexity of O(w). However, to optimize the calculation, a variable was allocated to store the running sum of the current window. Thus, for each sample it is only necessary to update the sum and divide by the parameter w to calculate the new SMA. Algorithm 1 displays the instructions executed to calculate the SMA. The calculation of the conventional average assumes that all elements have the same weight. The weighted average is calculated by assigning non-equivalent weights to the elements of the defined window. The EMA filter smooths the signal by calculating a weighted average with an exponential factor (α). The factor α lies between 0 and 1 and represents how much the older samples should contribute to the result. The EMA filter algorithm can be implemented recursively to optimize processing [25], requiring only one instruction to perform the calculation, as described in Algorithm 2. Similar to the EMA, the only adjustable parameter used to calculate the CF is the factor α. However, in this case, the factor α refers to the share of participation of the accelerometer and the gyroscope in composing the measured angle, hence the "complementary" in the filter name. Complementary filters can be applied whenever there are two or more sources of the same information. For example, Almeida [15] used the angle calculated through the magnetometer sensor to compose the CF as a third source. From the signal processing point of view, the data acquired by the accelerometer can better measure the slower and static movements, so a low-pass filter is applied. The data acquired by the gyroscope can better measure faster and more dynamic movements, so a high-pass filter is applied [26]. The filters are combined through the factor α, resulting in an integrated signal as illustrated in Figure 7. As shown in Figure 7, the two angles (Accel° and Gyro°) are calculated differently before being combined to improve the angle measurement (Roll°). For the accelerometer route, no additional information is required; it is only necessary to convert the acquired accelerometer data to an angle, as described in Section 4. On the other hand, the gyroscope data require an extra step; the acquired value is multiplied by the time interval between the current and previous sample, resulting in the angular displacement denoted ∆Gyro. Afterwards, ∆Gyro is added to the angle obtained for the previous sample, so that the past angle is corrected according to the most recent movement measured. The CF algorithm can also be optimized and implemented recursively, as demonstrated by Albaghdadi et al. [27]. This algorithm requires only two instruction lines to run, as described in Algorithm 3. Unlike the other two algorithms presented, the CF requires the storage of one more variable, the time interval between samples, represented by dt.
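Because Algorithms 1-3 themselves are not reproduced in this excerpt, the following Python sketch gives plausible O(1)-per-sample versions of the three filters as described above. The class names, variable names, and exact update expressions are assumptions consistent with that description, not the authors' embedded code.

from collections import deque

class SMA:
    # Simple moving average over a window of w samples; O(1) per sample by
    # keeping a running sum instead of re-summing the window.
    def __init__(self, w):
        self.w, self.buf, self.total = w, deque(), 0.0
    def update(self, x):
        self.buf.append(x)
        self.total += x
        if len(self.buf) > self.w:
            self.total -= self.buf.popleft()
        return self.total / len(self.buf)

class EMA:
    # Exponential moving average, recursive form: y = alpha * y_prev + (1 - alpha) * x,
    # where alpha weights the contribution of the older samples.
    def __init__(self, alpha):
        self.alpha, self.y = alpha, None
    def update(self, x):
        self.y = x if self.y is None else self.alpha * self.y + (1.0 - self.alpha) * x
        return self.y

class Complementary:
    # Complementary filter: the gyro-integrated angle (high-pass path) and the
    # accelerometer angle (low-pass path) are blended by alpha.
    def __init__(self, alpha):
        self.alpha, self.roll = alpha, 0.0
    def update(self, accel_roll_deg, gyro_x_dps, dt_s):
        gyro_estimate = self.roll + gyro_x_dps * dt_s   # integrate the angular rate
        self.roll = self.alpha * gyro_estimate + (1.0 - self.alpha) * accel_roll_deg
        return self.roll

All three updates cost a constant number of operations per sample, which is why they fit the O(1) requirement discussed above.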
Test Scenario

To investigate which optimization filter could give the best performance, the UR3 robotic arm was used. The UR3 collaborative industrial robot from Universal Robots is suitable for assembly and screwdriver activities, usually positioned on top of benches [28]. This robotic arm was chosen for two main reasons: firstly, the availability of the robot in the laboratory where the research was carried out; secondly, the fact that the UR3 has a certificate validating the precision of the joints' movements. Thus, it is possible to configure the UR3 to perform a given movement with minimal error, making the validation more consistent. Each wearable module was attached with clothing elastics to different parts of one of the joints of the UR3 robot, representing the knee joint, as illustrated in Figure 8. To compare the movement performed by the UR3 with the wearable system, a Python application was developed to communicate with the UR3 via Wi-Fi and collect the angle of the chosen joint. However, the UR3 updates the register that stores the joint angle at a frequency of 30 Hz, and the sampling frequency of the developed software was set to 100 Hz to approximate the amount of data generated by the wearable system. Figure 9 displays the data collected from the UR3 robot performing the programmed movement three times. To analyze the data acquired by the wearable system, a treatment session without stimulation was recorded for each test performed. In this way, the acquired data became available in a MongoDB (MongoDB available on the website: https://www.mongodb.com accessed on 10 September 2022) database, from which they can be imported into a Jupyter Notebook (Jupyter Notebook available on the website: https://jupyter.org accessed on 10 September 2022) for analysis before implementing the feature of knee angle recognition in the mobile app. Finally, two time-series arrays were generated for each test, containing the angles registered by the UR3 and by the wearable system. Three main metrics were used to evaluate the performance of the optimization filters: Mean Absolute Error (MAE), Mean Squared Error (MSE), and Root Mean Squared Error (RMSE). All the metrics seek to parameterize the difference between the two acquired vectors. To better describe the evaluation metrics, the UR3 vector will be interpreted as the vector of actual points A. After applying one of the optimization filters, the wearable system vector will be interpreted as F. If all elements of vector A are subtracted from vector F, the result is an error vector E, whose sum is expected to be 0 if there is no error. The MAE metric is simply the average of the errors, disregarding their signs. Thus, wearable system angles that were above or below the correct angle have the same weight in the calculation. Equation (2) presents the calculation of the MAE metric: MAE = (1/n) Σ |E_i|. The second evaluation metric is the MSE, which is the average of the squared errors. Thus, the errors are amplified in comparison to the MAE metric, making the differences between errors more significant. Equation (3) describes the calculation of the MSE metric: MSE = (1/n) Σ E_i². Lastly, the RMSE corresponds to the standard deviation of the error vector E. This metric highlights the largest errors (outliers), significantly increasing the value of the metric.
It is possible to calculate the RMSE from the square root of the MSE metric, as described in Equation (4): RMSE = √MSE.

Experimental Results

From the defined scenario, the systems were turned on, and data were acquired for 100 s. The same test was performed several times to verify the consistency of the acquired data, and no significant differences were found. The results presented in this section correspond to a segment of one of the acquisitions in which the movement was performed three times. Although the two systems were set up with the same sampling frequency, it was impossible to enable data acquisition from both systems simultaneously, making it necessary to synchronize the data manually. Figure 10 shows the raw data of the wearable system synchronized with the data acquired by the UR3 robot. As can be seen in Figure 10, both sensors, the gyroscope and the accelerometer, can reproduce the movement performed by the UR3, but they are inaccurate. As the accelerometer can better measure slow movements, the error is small during continuous movements, as shown in Figure 10 in the range of motion from 0° to 90°. However, when the UR3 stops, the accelerometer takes time to stabilize, generating noise peaks. Conversely, the gyroscope can better measure sudden speed changes, presenting less noise. However, in continuous movements, a small error is accumulated in every sample, resulting in a significant difference compared to the desired angle at the end of the movement. Comparing the two sensors using the evaluation metrics, the gyroscope has a considerably worse result than the accelerometer. For the MAE metric, the accelerometer has a value of 1.27 and the gyroscope 5.37, four times higher. For the RMSE, the accelerometer has 2.09 and the gyroscope 6.18, three times higher. Lastly, the MSE of the accelerometer is 4.38, and for the gyroscope it is 38.27, eight times higher. Due to this significant difference in precision, the SMA and EMA filters were applied only to the accelerometer data, since the data cannot be combined in these two filters and the result with gyroscope data would be worse. The three chosen filters have only one parameter to be adjusted to extract the maximum efficiency from each filter. In order to find the optimal point of these parameters, the range of interest over which each parameter could vary was first chosen: [1, 15] in steps of 1 for the window w of the SMA; [0, 0.5] in steps of 0.01 for the factor α of the EMA; and [0, 1] in steps of 0.01 for the factor α of the CF. Afterwards, the filters were applied to the wearable system's raw data, sweeping over the defined ranges. Each time a filter was applied, the evaluation metrics were calculated, resulting in the comparative chart in Figure 11. The red vertical lines in Figure 11 show the optimal points for each parameter, defined by the lowest RMSE and MSE values. With this, it is possible to find out which of the selected filters had the smallest error in measuring the movement of the UR3 robot. Table 1 displays the calculated values of the evaluation metrics for the wearable system's raw data and for the three applied filters (SMA, EMA, and CF). As seen in Table 1, the three filters reduced the error of the wearable system's raw data by more than 42% in RMSE and 67% in MSE. The MAE of all filters was below 1 degree. The CF filter obtained the best result among the filters, but with a slight difference of 4% over the EMA filter and only 2% over the SMA in RMSE and MSE. A compact sketch of these computations is given after this paragraph.
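For reference, the evaluation metrics and the parameter sweep described above can be written compactly as follows; this is a generic illustration (the array names and the use of RMSE alone as the selection criterion are assumptions), not the authors' analysis notebook.

import numpy as np

def mae(a, f):   # mean absolute error, Equation (2)
    return np.mean(np.abs(f - a))

def mse(a, f):   # mean squared error, Equation (3)
    return np.mean((f - a) ** 2)

def rmse(a, f):  # root mean squared error, Equation (4)
    return np.sqrt(mse(a, f))

def sweep_cf_alpha(a, accel_roll, gyro_x, dt, alphas=np.arange(0.0, 1.0 + 1e-9, 0.01)):
    # Grid-search the complementary-filter alpha against the UR3 reference vector a.
    best = None
    for alpha in alphas:
        roll, out = 0.0, []
        for ar, gx, d in zip(accel_roll, gyro_x, dt):
            roll = alpha * (roll + gx * d) + (1.0 - alpha) * ar
            out.append(roll)
        score = rmse(a, np.asarray(out))
        if best is None or score < best[1]:
            best = (alpha, score)
    return best  # (alpha, RMSE) at the optimum of the swept range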
Despite this, the same test was performed more than once, and the CF filter always obtained better results than the other filters. Because of this, the CF filter was selected to be implemented in the wearable embedded system. Thus, the mobile app will only receive the angles already processed, reducing the amount of information needed to be transmitted and optimizing the BLE packages. Figure 12 shows the movement recognized by the wearable system with the CF filter already implemented. Analyzing the new data, it was noticed that the performance of the CF filter was even better, resulting in 0.6 degrees on MAE, 1.0058 on MSE, meaning a 77% improvement, and 1.0029 on RMSE, meaning a 52% improvement. It was possible due to the improved resolution of the worked data. As previously, the raw data were sent to the mobile app; the maximum that the mantissa could reach was 255 since only one byte was reserved. Once the calculation is performed within the embedded system, it is possible to declare variables with greater precision to perform the calculation, such as the double type, representing a mantissa of up to 15 decimal places. Mobile App Implementation The myHealth app was developed by Franco et al. [8] for patients to manage their electrostimulation treatment sessions at home. For this, the app has a communication protocol with the wearable system capable of simultaneously sending the stimulation rules and collecting the EMG sensor data during a session. The protocol is flexible to adjust the stimulation parameters during the session, enabling the app to consider the patient's immediate response through the sensors. The mobile app has three main screens for interacting with the patient, the first being the home screen on which the history of treatment sessions and upcoming sessions are displayed. When the patient selects a session to perform, the monitoring treatment screen will be displayed. In this interface, it is possible to follow the treatment progress, command the treatment with pause and stop and see a real-time chart of the EMG signal. After the session ends, a feedback form screen displays some questions about the patient's physical status. All information generated during a treatment session is made available to the physician responsible for the care plan. With this, the physician will be better prepared to design the next treatment session, paying attention to the last session's performance and the treatment's progress. Knee angle recognition will be one of the essential attributes within this context. In addition to being sent to the physician, it will also be helpful to guide the patient to follow the treatment instructions and confirm that the movement is consistent with the desired one. As already described in Section 3, the app was refactored to receive the IMU sensor data and transport them to the cloud along with the EMG sensor data. With the optimization filter implemented, the wearable system sends only the angles in a packet with approximately 40 samples every 400 ms, resulting in a sampling frequency of 100 Hz. In order to have an immediate graphical response to the recognized movement, the session monitoring screen has been refactored by adding a new tab called IMU. As can be seen in Figure 13, in this new interface, it is possible to visualize the thigh angle, the shin, and the resulting angle of the knee. A button was created to send the command calibration to the wearable system to facilitate the tests. 
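On the receiving side, each 12-byte sample (roll angle, gyroscope X value, and time interval, 4 bytes each) has to be unpacked from the BLE payload. The sketch below follows the sign/integer/mantissa layout described earlier; the byte order and the fractional scale are assumptions, since the text does not specify them.

import struct

def decode_field(field: bytes) -> float:
    # Decode one 4-byte field: 1 sign byte, 2-byte unsigned integer part,
    # 1-byte fractional part. The big-endian order and the /100 scale are assumed.
    sign = -1.0 if field[0] else 1.0
    integer = struct.unpack(">H", field[1:3])[0]
    return sign * (integer + field[3] / 100.0)

def unpack_imu_packet(payload: bytes):
    # Split a BLE payload into (roll_deg, gyro_x, dt) triples of 4-byte fields each.
    triples = []
    for off in range(0, len(payload) - len(payload) % 12, 12):
        fields = payload[off:off + 4], payload[off + 4:off + 8], payload[off + 8:off + 12]
        triples.append(tuple(decode_field(f) for f in fields))
    return triples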
Lastly, a set of rectangles was designed to represent the patient's thigh, shin, and foot members. Thus, when the mobile app receives the data packet from the wearable system, the interface updates the rotation of the rectangles representing the movement identified. With the implementations, the wearable modules were put back in the UR3 robot, and the UR3 motion sequence was executed again. This test verified that the rectangles representing the patient's leg could follow the movement performed by the UR3. The performance remained the same since there were no more changes in the algorithms, only in the graphical part of the app. Test on Volunteers The wearable module systems were applied to a real environment to validate the developed content. Five volunteers were recruited to perform a test performing the knee extension movement. In this test, each volunteer remained seated on a bench and was requested to lift their leg five times for one minute, keeping their leg raised for approximately 4 s each time in the maximum muscle contraction. Figure 14 shows the volunteer's knee angle being recognized by the MyHealth mobile app through the IMU sensors of the wearable system developed. As planned, the app successfully recognized the volunteer's knee angle. However, it was noted that one more adjustment would need to be added to the systems. As can be seen in Figure 14, the wearable module positioned on the volunteer's thigh does not remain precisely parallel to the thigh bone. Thus, it generates an angular deviation that varies between each person due to the unique anatomy of the leg muscle. Fortunately, this angular deviation tends to be constant and the same during all treatment sessions, varying only by person. Thus, this value can be added and validated by the physician during the first treatment sessions, which are expected to be performed in the clinic and accompanied by the professional. Finally, an example of the information that will be delivered in the administrative portal to the physician after each treatment session can be seen in Figure 15. In this case, the chart shows the volunteer's knee angle movement during the test performed. For better understanding, the scale was inverted, with 90º representing the thigh and shin aligned perpendicularly and 0º when the leg is fully extended. Thus, it becomes more evident after the five times that the volunteer raised his leg, remained elevated, and lowered it repeatedly for approximately 10 s. Conclusions This article presents the design, the necessary hardware, and the implementation of a wearable motion acquisition system. This acquisition system is composed of two main components, an ESP32 microcontroller and an MPU-6050 sensor (IMU) and its development is low cost. Due to this, this hardware can present limitations in some environments, as in the case reproduced in this article. The system implemented in the wearable's hardware was designed to share its processing time to acquire data from an EMG sensor, apply stimulation, and send data to the mobile app. With this in mind, the wearable system was developed with the primary requirement to consume the minimum processing power to collect the data from the IMU sensor. The focus of the system implemented in the wearable is to provide the data necessary for a mobile app to recognize the leg's movement through the knee angle. According to this objective, three filters were explored to reduce the noise of the IMU sensor data, namely, SMA, EMA, and CF. 
The CF filter produced the best values among the evaluation metrics, achieving an improvement of 77% in the MSE and 52% in the RMSE in relation to the raw data. The wearable acquisition system showed an absolute average error of 0.6 degrees in recognizing the movement performed by the UR3 robot arm. The recognition of the knee angle will be an asset for the myHealth app, as its objective is to enable electrostimulation treatment to be performed at the patient's home. For this purpose, an interface was developed to show a sketch of the patient's leg with its components. As soon as myHealth receives new data from the wearable system, the leg representation is recalculated according to the recognized movement, and a fresh sketch is redrawn, exhibiting the patient's response to the stimuli. We plan to refactor this interface in the future to show instructions interactively during a treatment session. This way, the treatment may be more attractive to elderly audiences. In addition, all movement performed by the patient during treatment is saved and made available for the physician to analyze later. The system was tested with volunteers in a real environment and successfully measured the movement performed. Due to the heterogeneous anatomy of the volunteers, it was realized that incorporating an initial parameter would be needed. This parameter refers to the angular deviation in a static position. This information should be measured and validated by the physician during the first treatment sessions, which are expected to occur in the physical therapy clinic. In addition, the presented system has some limitations and restrictions on operating as planned. The system can measure the angle of only one knee joint at a time, so to measure both knees, it would be necessary to duplicate the IMU sensors and refactor the communication between the wearable system and the mobile app. Although the two modules can measure the angle regardless of the surface on which they are placed, the calculations performed in the mobile app assume that the modules are in the correct order and positioned on the same axis of the body, with the sagittal or frontal axis being possible. In this way, if the modules have been switched or twisted, the mobile app will misinterpret the measured movement. In the near future, we expect to embed the wearable modules presented in this article on a Printed Circuit Board (PCB). The PCB designed for the NanoStim project includes an EMG sensor and an electrostimulation actuator. With this in mind, the IMU sensors will be incorporated into a wearable garment, such as pants or shorts, and connected to the PCB through two cables responsible for power supply and data communication. Thus, it is expected to solve the positioning constraint, as the sensors will be positioned and fixed on the wearable in the correct and unalterable order. With the complete hardware, it will be possible to test all the components designed to carry out a treatment session simultaneously and in a synchronized manner. An example of a more complex validation would be the use of the Ariel Performance Analysis System (APAS), which uses cameras to measure the movement of the joints, providing a more reliable benchmark for the presented system. Furthermore, we plan to create algorithms that can benefit from the combination of IMU and EMG sensors. An example is implementing a contraction clipping algorithm, using the knee angle to indicate the initial and final moment at which the EMG sensor data should be clipped.
With the EMG sensor alone, identifying contraction can be tricky since injured or diseased muscles can present low activity in the EMG signal.
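As a rough illustration of the contraction-clipping idea mentioned above, knee-angle thresholds could mark the start and end of each contraction window in the EMG stream; the threshold value and the function below are purely illustrative and are not part of the implemented system.

def contraction_windows(knee_angle_deg, threshold=30.0):
    # Return (start, end) index pairs where the knee angle exceeds an illustrative
    # threshold; EMG samples inside each window would then be clipped out for analysis.
    windows, start = [], None
    for i, angle in enumerate(knee_angle_deg):
        if angle >= threshold and start is None:
            start = i
        elif angle < threshold and start is not None:
            windows.append((start, i))
            start = None
    if start is not None:
        windows.append((start, len(knee_angle_deg)))
    return windows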
2022-10-12T16:25:49.080Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "f73d8cb936306b998f56624e354c4a19b7efc242", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/22/19/7605/pdf?version=1665370426", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "71cadcf4af388f69f3c7183aa8479819971f7972", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
262214115
pes2o/s2orc
v3-fos-license
Biophysiology of in ovo administered bioactive substances to improve gastrointestinal tract development, mucosal immunity, and microbiota in broiler chicks

Early embryonic exogenous feeding of bioactive substances is a topic of interest in poultry production, potentially improving gastrointestinal tract (GIT) development, stimulating immunization, and maximizing the protection capability of newly hatched chicks. However, the biophysiological actions and effects of in ovo administered bioactive substances are inconsistent or not fully understood. Thus, this paper summarizes the functional effects of bioactive substances and their interaction merits to augment GIT development, the immune system, and microbial homeostasis in newly hatched chicks. Prebiotics, probiotics, and synbiotics are potential bioactive substances that have been administered into embryonated eggs. Their biological effects are enhanced by a variety of mechanisms, including the production of antimicrobial peptides and antibiotic responses, the regulation of T lymphocyte numbers and of immune-related genes in either an up- or downregulated fashion, and the enhancement of macrophage phagocytic capacity. These actions occur directly through the interaction with immune cell receptors, stimulation of endocytosis, and phagocytosis. The underlying mechanisms of bioactive substance activity are multifaceted, enhancing GIT development and improving both the innate and adaptive immune systems. Thus, summarizing these modes of action of prebiotics, probiotics, and synbiotics can support more informed decisions and also provide a baseline for further research.

INTRODUCTION

In commercial broilers, more than 50% of the productive lifespan of chickens is determined by the conditions of the incubation and neonatal periods (Ferket, 2012; Patricia et al., 2020; Kouassi and Monika, 2023). The time period from the 18th day of incubation (DOI) to 4 days posthatch is considered the critical period for rapid intestinal development (Dibner et al., 1996; Iji et al., 2001a), and for the survival and growth of chicks (Ferket, 2006). During this critical period, chicks also undergo both metabolic and physiological shifts from endogenous nutrients to exogenous feed utilization (Iji et al., 2001a; De Oliveira et al., 2008; Ferket, 2012; Patricia et al., 2020). This transition creates a high energy and nutrient demand, potentially leading to an imbalance of nutrients or malnutrition (Kadam et al., 2013; Ghanaatparast-Rashti et al., 2018; Patricia et al., 2020) and to limited embryonic development and posthatch growth performance (Ohta et al., 1999), which can hinder the development and maturation of the intestine of chicks (Geyra et al., 2001a; Gao et al., 2017a). In addition, the immune system of the neonatal chick is immature and inefficient. Although many authors have indicated that most of the development of the immune system is complete at the late embryonic phase (Bar-Shira and Friedman, 2006; Reemers et al., 2010; Eren et al., 2016; Song et al., 2021), the maturation and response of the immune system increase with posthatch age until 30 to 34 d (Song et al., 2021). Thus, chicks are highly vulnerable to environmental threats during the first weeks posthatch (Farnell et al., 2006; Pender et al., 2017).
For the aforementioned reasons, and with the banning of antibiotic growth promoters (AGP), in ovo feeding of bioactive substances has been studied with regard to its impact on GIT function and the microbiome profile (De Cesare et al., 2019); it has been found to correct nutrient imbalance (Foye et al., 2006; Nasir and Peebles, 2018), improve growth rate and feed conversion efficiency, enhance weight (Bogucka et al., 2017; Stefaniak et al., 2019; Reicher et al., 2022), support the development and maturation of the immune system (Murate et al., 2015; Pender et al., 2017; Stefaniak et al., 2019; Qamar et al., 2020), and reduce the rate and severity of enteric infections (Dibner et al., 1998; Song et al., 2021). In ovo administration of bioactive substances has also been shown to improve intestinal protection, antioxidant capacity and apoptosis (Bai et al., 2013; Broom and Kogut, 2018; Wu et al., 2019). In this context, the in ovo feeding technique has shown promise and has become popular as a way to ensure access to bioactive substances and nutrients at the embryonic development stage of chicks (Bogusławska-Tryk et al., 2012; Cox and Dalloul, 2015; Roto et al., 2016; Gao et al., 2017b; Ghanaatparast-Rashti et al., 2018; Nasir and Peebles, 2018; Siwek et al., 2018; Das et al., 2021). The method was created in the 1980s to administer the Marek's disease vaccine, and resulted in chicks with significantly better immunity, performance, and gut health (Sharma and Burmester, 1982; Siwek et al., 2018). The earliest in ovo bioactive substance demonstration attempts showed promising results, with long-lived biological effects (Siwek et al., 2018; Patricia et al., 2020; Dunislawska et al., 2021). Negative results in terms of hatchability, performance, and mortality have also been noted. These discrepancies might be due to the bioactivity mechanisms, whose effectiveness may be closely linked with the structure and composition of the bioactive substance used (Wassie et al., 2021). Information on the modulatory action of bioactive substances on intestinal development, the immune system, and the GIT microbiome is inconsistent and therefore requires further insight. Thus, the intention of this review is to summarize the modulation mechanisms of prebiotics, probiotics, and synbiotics on GIT morphology, mucosal immunity, microbiota, and pathogen-combating capability in broiler chicks.

INTESTINAL DEVELOPMENT AND FUNCTION

The successful development of the numerous intricate and highly specialized sections of the GIT depends on the availability of nutrients in the egg during the embryonic and posthatching stages of chicks (Sobolewska et al., 2017b; Kouassi and Monika, 2023). However, endogenous nutrients alone may not be sufficient to sustain the late stage of embryonic development and the hatching process. Numerous previous studies have confirmed that the in ovo delivery of prebiotics, probiotics, and synbiotics has ameliorative modulatory effects on GIT development in chicks (Bogucka et al., 2017; Kouassi and Monika, 2023) (Table 1). For instance, Mista et al. (2017) demonstrated that in ovo injection of L.
lactis subsp. lactis IBB (1,000 CFU) combined with inulin (1.76 mg/embryo) as a synbiotic at 12 DOI improved intestinal morphology, the cecal SCFA profile, and the growth of broiler chickens. Consistently, in ovo feeding (IOF) of different amino acids (AAs) at the late embryonic phase has been shown to remarkably improve the morphological and functional development of the intestinal mucosa (Al-Murrani, 1982; Gao et al., 2017b; Nazem et al., 2017). Likewise, the weight of the small intestine at 4 and 21 DOH changed linearly with the inclusion of sodium butyrate during posthatch feeding (Lan et al., 2020), and supplementation with 10% degraded date pits also altered the intestine of broilers (Alyileili et al., 2020). Immediately after hatching, the proportional growth of the small intestine is greater than that of the body weight (BW) of chicks; it peaks within 6 to 10 d of hatching (DOH) (Katanbaf et al., 1988; Sklan, 2001), and the intestine is completely formed by 12 DOH (Alcantara et al., 2013); this might be due to the accelerated processes of enterocyte proliferation and differentiation (Geyra et al., 2001a). In a previous study, the relative lengths of the duodenum (21 DOH), jejunum (14 and 21 DOH), ileum (14 and 21 DOH), and ceca (21 DOH) changed linearly with the inclusion of sodium butyrate during posthatch feeding (Lan et al., 2020). Another study reported that the relative weight of the duodenum peaked at 3 DOH, with a subsequent decline in relative intestinal growth through 21 DOH in heavy-breed chickens (Dror et al., 1977). Similarly, the duodenal villus area was greater in broiler than in layer embryos at 14 DOI, and this difference continued through 7 DOH (Uni et al., 1995a,b). These authors also showed that the duodenum had the highest values among the intestinal segments, with more enterocytes per villus in the broiler embryo/chick, increasing with age. It can be concluded that the available bioactive substances, breed, and age of chicks affect GIT compartment development (Bogusławska-Tryk et al., 2012). Villus height (VH), crypt depth (CD), and the ratio between villus height and crypt depth (VH/CD) are the 3 most significant criteria that determine the developmental and functional state of the broiler GIT (Fan et al., 1997; Yamauchi, 2002; Xu et al., 2003; Bogusławska-Tryk et al., 2012; Hassan et al., 2018; Reicher et al., 2022). The intestinal epithelium that covers the villi invaginates into the lamina propria, forming tubular glands called intestinal crypts (Sobolewska et al., 2017b). In ovo feeding of methionine increases the VH, villus width and area, the height of enterocytes, which play a key role in nutrient absorption from the intestinal lumen into the blood vessels (Potten and Loeffler, 1987), and goblet cell density (Nazem et al., 2017); arginine (Arg) increases the VH and lowers the CD in the duodenum (Gao et al., 2017c); and a 10% degraded date pits diet increases the VH and VH/CD and lowers the CD of the broiler intestine (Alyileili et al., 2020). An increase in any of these morphometric parameters is anticipated to improve hydrolysis, immune or barrier function, nutrient absorption (Awad et al., 2008; Salvi and Cowles, 2021), and the capabilities of the brush border membrane (Yamauchi, 2002). A deeper CD stimulates the secretion of digestive enzymes (Xu et al., 2003) and the formation of intestinal epithelial cells, since the crypt comprises populations of continuously proliferating stem cells (Potten and Loeffler, 1987).
INTESTINAL MUCOSAL IMMUNITY Intestinal mucosal immunity is the first barrier against pathogens (Muller et al., 2005). This immunity is achieved by a highly efficient mucosal barrier and a specialized, multifaceted immune system made up of a large population of scattered immune cells and the gut-associated lymphoid tissue (GALT) (Ahluwalia et al., 2017). The intestinal mucosa contains more than 70% of the immune cells (B cells, T cells, and macrophages) responsible for maintaining and controlling the GIT health of chickens (Muller et al., 2005). A well-developed GIT maintains immune homeostasis in chickens (Sobolewska et al., 2011; Bogucka et al., 2017; Sobolewska et al., 2017a). The mucosal immune defense of the gut can be divided into 3 different anatomical parts: the intestinal epithelial barrier, the lamina propria, and the GALT. The GALT is the largest lymphatic organ of the body and is composed of 3 different entities of organized lymphoid tissues, namely Peyer's patches (PP), isolated lymphoid follicles (ILF), and mesenteric lymph nodes (MLN) (Mason et al., 2008). Mucosal immune defense can also be divided into inductive and effector sites. The inductive sites, where antigens sampled from mucosal surfaces activate naive and memory T and B lymphocytes, consist of organized nodes of lymphoid follicles and include the PP, ILF, and MLN (Brandtzaeg et al., 2008; Mason et al., 2008). The effector sites, where the effector cells perform their action after extravasation, retention, and differentiation, consist of the epithelium and the lamina propria, where the lymphocytes are scattered throughout the tissue (Mowat, 2003; Brandtzaeg et al., 2008; Mason et al., 2008) (Figure 1). Immune Cell Receptors The host detects pathogen-associated molecular patterns (PAMPs) using innate immune sensors known as pattern-recognition receptors (PRRs), which mediate antimicrobial responses. PRRs are expressed by dendritic cells (DCs) and other phagocytic cells of the immune system and enable the detection of microbes (Trinchieri and Sher, 2007; Peterson and Artis, 2014). Furthermore, PRRs are expressed by IECs on their surface as well as within their cytoplasm (Backhed and Hornef, 2003; Hurley and McCormick, 2004; Yuan et al., 2004; Trinchieri and Sher, 2007). PRRs are not limited to TLRs, and they can bind to microbial compounds of both pathogenic and commensal bacteria. However, the mechanisms by which PRRs differentiate between pathogenic and commensal organisms have not yet been fully elucidated (Mowat and Viney, 1997; Peterson and Artis, 2014). The activation of TLRs is promoted by different bioactive substances in chickens (Sato et al., 2009; Terada et al., 2020; Rehman et al., 2021). Lactobacillus reuteri (LR) and Clostridium butyricum (CB) affect the innate immune system in broilers by modulating the expression of TLRs (Terada et al., 2020). Ying et al. (2022) reported that the mRNA expression of TLR1A, TLR1B, and TLR2A was significantly downregulated in the ileum with dietary quercetin supplementation in Arbor Acres (AA) broilers. These authors also reported that quercetin supplementation significantly downregulated the mRNA expression of MyD88, TIRAP/MAL, TBK1, IKK, NF-κB, and IRF7 (Ying et al., 2022). Likewise, TLR2 and TLR4 mRNA expression was significantly higher after treatment with mannan oligosaccharides (Cheled-Shoval et al., 2011). In contrast, another study found no significant difference in the expression levels of TLR4 following raffinose injection in broilers (Berrocoso et al., 2017).
Activation of Lymphocytes and Phagocytosis The intestinal lamina propria contains abundant B lymphocytes, especially IgA+ cells (Yang et al., 2021). These IgA+ cells form an important mucosal protective layer on the surface of the intestinal mucosa and play an important role in protecting the intestinal tract from pathogenic infection. In ovo injection of 1 and 2 mg/egg of Astragalus polysaccharide (APS) at 18 DOI increased the IgA+ cells and improved the sIgA content in the intestinal mucosa (Yang et al., 2021). Probiotics also influence humoral and cell-mediated immune responses by upregulating T lymphocyte numbers and associated responses (Brisbin et al., 2010; Lee et al., 2010). Similarly, the in ovo delivery of synbiotics has a modulating impact on the posthatching development of the GALT, high colonization of the GALT by T cells in the cecum, and enhanced B-cell proliferation in peripheral lymphatic organs (Siwek et al., 2018). Madej and Bednarczyk (2016) showed that in ovo feeding of prebiotics and synbiotics (inulin, transgalactooligosaccharides, Lactococcus lactis subsp. lactis IBB SL1 or Lactococcus lactis subsp. cremoris IBB SC1) impacted the composition of T cells and B cells in the GALT; the increased diffuse lymphohistiocytic infiltration and solitary lymphoid follicles in the mucosa indicated an increased immunological response (Junaid et al., 2018). Consistently, dietary supplementation with Lactobacillus-based probiotics modulated the intraepithelial lymphocyte population that expresses the surface marker cluster of differentiation 4 (CD4), resulting in induced intestinal immunity against coccidiosis (Zulkifli et al., 2000; Dalloul et al., 2003). Augmenting the phagocytic capacity of macrophages, heterophil oxidative bursts, and degranulation are mechanisms of action by which bioactive compounds enhance the innate immune system of chickens (Farnell et al., 2006; Higgins et al., 2007; Stringfellow et al., 2011; Pan and Yu, 2014). For instance, in ovo injection of prebiotics (0.76 mg/egg inulin + 0.528 mg/egg Bi2tos) and synbiotics (0.76 mg/egg inulin + 0.528 mg/egg Bi2tos + L. lactis subsp. cremoris IBB with 3 × 10^8 living cells) on the 12th DOI induced a transient increase in the rate of Phag+ cells at 21 DOH (Stefaniak et al., 2019). Consistently, Higgins et al. (2007) demonstrated that administering a multistrain Lactobacillus probiotic (3 Lactobacillus bulgaricus, 3 Lactobacillus fermentum, 2 Lactobacillus casei, 2 Lactobacillus cellobiosus, and 1 Lactobacillus helveticus strains) to Salmonella Enteritidis-exposed broilers reduced the number of macrophages in the ileum and ceca. The macrophage count reduction in infected birds was attributed to a decrease in the bacterial load due to competitive exclusion mechanisms. In Ovo Administered Bioactive Substances for the Production of Cytokines and Chemokines Cytokine secretion by immune cells has been reported to stimulate GC proliferation and mucus production. For example, the secretion of IFN-γ through the activation of the Th1 pathway, of IL-13 by dendritic cells and macrophages, and of IL-4, IL-5, IL-9, and IL-13 by T helper 2 cells has been shown to stimulate GC proliferation and mucus production (Birchenough et al., 2015). The mucins that are primarily secreted by goblet cells (GCs) are used to create a protective mucus layer, as shown in Figure 2.
Other GC proteins, including IgA, lysozyme, and avidin, also play major roles in the innate immunity of chickens (Bar Shira and Friedman, 2018). Regulated secretion of mucus is a rapid response to external stimuli and serves as the first defensive mechanism of the gut. Glycosylation of O-glycans regulates the distribution of mucin types in GCs, which can be affected by both host and external factors, including pre/probiotic nutrients in the diet, inflammatory markers, hormones and neurotransmitters, and commensal and pathogenic bacteria (Duangnumsawang et al., 2021). The expression of immune-related genes can be either up- or downregulated. The population and structure of microbial communities are dynamic and can be affected by age (especially at the early stages of life), sex, diet or feed additives, phytobiotics, bacteriophages, and noninfectious and/or infectious stressors (Clavijo and Florez, 2018; Diaz Carrasco et al., 2019). Furthermore, the microbial community varies among intestinal segments (from crop to cloaca) and sampling sites (mucosal vs. luminal content). For instance, the crop harbors 10^3 to 10^4 CFU/g, mainly Lactobacilli and Streptococci, whereas the ceca (the most predominant niche) harbor 10^11 to 10^12 CFU/g, including Ruminococci, Bacteroides, Clostridia, Streptococci, Enterococci, Lactobacilli, and E. coli (Yadav and Jha, 2019). In addition, the GIT microbiota reaches a mature state between wk 2 and 3 posthatching (Huang et al., 2018); this is very late compared with the exposure to environmental infectious threats starting on the first day of hatching. Consequently, this dysbiosis can disrupt the intestinal morphology and activities of chickens (Shang et al., 2018). Thus, maintaining and controlling natural microbial homeostasis in the GIT is crucial. The intestinal microbiota can be modulated through in ovo feeding of embryos. In ovo administration of a wheat-based prebiotic at 17 DOI increased the intestinal Lactobacilli and Bifidobacteria populations (Tako et al., 2014). Another study demonstrated that in ovo injection of L. plantarum IBB3036 + lupin RFO (10^5 CFU probiotic + 2 mg prebiotic) and L. salivarius IBB3154 + Bi2tos (10^4 CFU probiotic + 2 mg prebiotic) into the air chamber at 18 DOI modulated the GIT microbiota due to their adherence ability (Aleksandrzak-Piekarczyk et al., 2019). Furthermore, in ovo injection of L. acidophilus at a dose of 1 × 10^6 into the amnion on 18 DOI significantly increased the concentration of the probiotic bacteria Lactobacillus spp. and lowered the concentration of harmful microbes in the jejunal contents of broilers (Kanagaraju et al., 2019). Likewise, dietary MOS (1 g/kg) increases the Lactobacillus and Bifidobacterium content of the chicks' intestine (Baurhoo et al., 2007), injection of 2 Lactobacillus strains, 1 Bifidobacterium strain, 1 Enterococcus strain, and 1 Pediococcus strain (a multibacterial species probiotic) modulates the composition and activities of the cecal microflora of broilers (Mountzouris et al., 2007), and Enteromorpha polysaccharide (EP) regulates the intestinal microbiota in chickens (Wassie et al., 2021). Therefore, in ovo delivery of bioactive substances has proven similar to dietary supplementation in terms of promoting a healthy microbial balance and enhancing host defenses against several pathogens at the early stage of chicken development. IN OVO BIOACTIVE SUBSTANCES EFFICACY FOR COMBATING PATHOGENS The gut microbiota is one of the main defense components in the digestive tract against enteric pathogens.
The disturbance of the gut microbiota–host interaction plays a crucial role in the development of intestinal disorders. For instance, the cecal microbiota has been significantly changed in chickens infected with C. perfringens or Escherichia coli (Feng et al., 2010; Stanley et al., 2012; Skraban et al., 2013), Eimeria species (Perez et al., 2011; Stanley et al., 2014; Wu et al., 2014), and Salmonella Enteritidis (Nordentoft et al., 2011; Juricova et al., 2013; Videnska et al., 2013). With respect to maintaining the health of GIT microbiotas, several investigations of in ovo administered bioactive substances have been reported (Table 3); these substances exert preventive and protective effects against different infections in chickens through various mechanisms of action. Thus, different bioactive substances could be used as biological alternatives in combating chicken diseases by maintaining GIT microbial homeostasis, as shown in Figure 2 (Holzapfel and Schillinger, 2002; Patterson and Burkholder, 2003; Siragusa and Ricke, 2012). Probiotics have been used for eliminating many economically important poultry diseases and pathogens (Dalloul and Lillehoj, 2005; Knap et al., 2010; Pender et al., 2016). Many strains of probiotic bacteria, such as Enterococci, Bacilli, lactic acid bacteria, and yeast, have been used for their anti-C. perfringens activities (Rajput et al., 2020). However, Yamawaki et al. (2013) and De Oliveira et al. (2014) demonstrated that probiotic (Lactobacillus spp.) injection into the air cell or into the amniotic fluid on the 18th DOI did not protect against Salmonella Enteritidis challenge in the cecum of chicks at 2 to 3 d posthatch. These inconsistent results may be attributed to variation in the probiotic strain used, volume, delivery site, genotype, injection procedure, and hygiene practice. (Table 3 lists the probiotic species, dose levels, and injection days used for the prevention or alleviation of enteric infections through in ovo techniques; these probiotics have different mechanisms, including competitive exclusion, production of inhibitory substances, immune system modulation, and improved barrier function. CFU, colony-forming units; AFB1, aflatoxin B1; DOI, days of incubation.) Although probiotics have different mechanisms of action, mucosal immunity against infection is mainly mediated by the action of secretory immunoglobulin A (sIgA), which can block the connection between pathogens and the epithelium and cause bacterial agglutination (Mantis et al., 2011). Thus, intestinal sIgA levels can increase with supplementation of the live yeast S. cerevisiae or S. boulardii in broiler chickens (Gao et al., 2009; Rajput et al., 2013). However, Wang et al. (2017) indicated that the supplementation of Kluyveromyces marxianus did not significantly influence the jejunal and ileal sIgA content, which confirmed the different physiological roles of Kluyveromyces marxianus compared with other yeast probiotics. Similarly, in ovo or dietary supplementation of prebiotics (polysaccharides), including yeast beta-glucans, enhanced gut health in chickens (Anwar et al., 2017), alleviated aflatoxin B1-induced DNA damage in lymphocytes (Zimmermann et al., 2015), and prevented C. perfringens-induced necrotic enteritis (Tian et al., 2016). Likewise, supplementation of mannan-oligosaccharides (MOS) in broilers increased Lactobacillus community diversity and decreased C. perfringens and E.
coli in the ileum (Kim et al., 2011). Exploitation of the synergistic effects of different bioactive compounds with different molecular weights and compositions has gained increasing interest. This was evidenced in a study by Jen et al. (2021), where high-molecular-weight glycans and low-molecular-weight polysaccharides composed of glucose (Glc), mannose, and galactose exhibited synergistic effects in inhibiting proinflammatory mediator production. Similarly, combined polysaccharides from Eisenia arborea and Solieria filiformis exhibited higher antiviral activities than either polysaccharide alone (Moran-Santibanez et al., 2016). Thus, the summary provided in this review indicates that the bioactivities of polysaccharides are closely linked with their structure and composition. CONCLUSIONS AND IMPLICATIONS In ovo administered prebiotics, probiotics, and synbiotics are helpful for GIT development, immune system function, and microbial homeostasis in chicks. The literature describes 2 major time points of in ovo delivery in chicken embryo development. The first time point is around 12 DOI, which is the main window for the delivery of prebiotics and synbiotics. The second time point is around 17/18 DOI, at which in ovo supplementation can mitigate the negative effects of starvation during the hatching window. However, the ameliorative effects of bioactive substances differ with their type and dosage, mode of action, and site of injection, and can vary during the various growing periods of chicks. Many studies have compared the beneficial effects of bioactive compounds at different doses and embryonic times of injection; however, no conclusive recommendation can be made because of the various confounding factors. The mechanisms of action of bioactive substances are diverse, including antimicrobial peptide production, competitive exclusion, humoral and cell-mediated immune response alteration, phagocytic capacity, and degranulation. Mainly, these effects are helpful in the context of many economically important disease agents such as Eimeria, Newcastle disease virus, and infectious bursal disease virus. Although the benefits of bioactive substances are evident in numerous studies, further elucidation of their immunoregulatory effects on intestinal immunity under challenging conditions of disease is required. Moreover, further research will help establish the biological compatibility of bioactive substances and hosts as a means of promoting early GIT development, improving the immune system, establishing beneficial bacteria, and enhancing gut health in chicks.
Figure 1. Immune system modulation mechanisms and antigen entrance routes. Innate immunity constitutes the first line of defense and is mediated by innate immune cells such as tissue macrophages, dendritic cells (DC), and granulocytes, which elicit their effector function within minutes to hours following antigen exposure. Innate cells become activated via germ-line encoded pattern-recognition receptors (PRR), including toll-like receptors (TLR) and NOD-like receptors (NLRP), which recognize invariant features of pathogens (pathogen-associated molecular patterns, or PAMPs) and tissue damage. Once activated, innate cells such as macrophages and neutrophils can effectively clear antigens via phagocytosis. Other types of innate cells, such as DC, take up and process antigens, resulting in the expression of antigenic epitopes. These DC can then serve as antigen-presenting cells (APC) for the priming of the adaptive immune system. In this way, the early innate response is coupled to, and facilitates, adaptive immunity. Antigens can enter through the microfold (M) cells in the follicle-associated epithelium (FAE) and be passed on to the dendritic cells (DCs); the DCs then present the antigen directly to T cells in the Peyer's patches or, alternatively, may reach the mesenteric lymph nodes (MLNs) through the draining lymph, where the antigen is subsequently presented to naïve T cells. Another route involves direct antigen sampling of the intestinal lumen by DCs, which extend dendrites between the epithelial cells to reach the lumen. Additionally, antigens may gain entrance through the FAE enterocytes, which can either pass antigens on to DCs or possibly act as local APCs via expression of MHC class II. 1. Activated antigen-specific clones, 2. Responding T lymphocyte, 3. Regulatory T lymphocyte, 4. Microbial antigen presented by antigen-presenting cell, 5. Infected cell expressing microbial antigen. Figure 2. Probiotics reduce colonization of pathogens through competitive exclusion and enhance the immune response; in ovo provided probiotics increase T lymphocyte numbers and modulate the production of several immune responses, including T helper type-1 (Th1)-dependent delayed-type hypersensitivity (DTH) and IgG antibodies, T helper type-2 (Th2) responses (IgE antibodies), and T helper-17 (Th17) cytokines; this regulates the development of regulatory T (Treg) cells in the mesenteric lymph nodes, to which mucosal dendritic cells carry antigens and become conditioned for the induction of T cells. Table 1. The effect of in ovo administered bioactive substances on the intestinal development of broilers. Table 2. Effects of in ovo administered bioactive substances on the immune system of broilers. Table 3. In ovo administration of bioactive substances for broilers against infection.
2023-09-24T15:35:58.837Z
2023-09-01T00:00:00.000
{ "year": 2023, "sha1": "37f5495543f849ed1fc09265275f74b889fdc4ab", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.psj.2023.103130", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9cc5deb04480fd5efa2d95b7c0b9bd93cbec0b52", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
252616469
pes2o/s2orc
v3-fos-license
Parental Preferences about Policy Options Regarding Disclosure of Incidental Genetic Findings in Newborn Screening: Using Videos and the Internet to Educate and Obtain Input Our objective was to develop and test a new approach to obtaining parental policy guidance about disclosure of incidental findings of newborn screening for cystic fibrosis (CF), including heterozygote carrier status and the conditions known as CFTR-related metabolic syndrome (CRMS) and/or cystic fibrosis screen positive inconclusive diagnosis, CFSPID. The participants were parents of infants up to 6 months old recruited from maternity hospitals/clinics, parent education classes and stores selling baby products. Data were collected using an anonymous, one-time Internet-based survey. The survey introduced two scenarios using novel, animated videos. Parents were asked to rank three potential disclosure policies—Fully Informed, Parents Decide, and Withholding Information. Regarding disclosure of information about Mild X (analogous to CRMS/CFSPID), 57% of respondents ranked Parents Decide as their top choice, while another 41% ranked the Fully Informed policy first. Similarly, when considering disclosure of information about Disease X (CF) carrier status, 50% and 43% gave top rankings to the Fully Informed and Parents Decide policies, respectively. Less than 8% ranked the Withholding Information policy first in either scenario. Data from value comparisons suggested that parents believed knowing everything was very important even if they became distressed. Likewise, parents preferred autonomy even if they became distressed. However, when there might not be enough time to learn everything, parents showed a slight preference for deferring decision-making. Because most parents strongly preferred the policies of full disclosure or making the decision, rather than the withholding option for NBS results, these results can inform disclosure policies in NBS programs, especially as next-generation sequencing increases incidental findings. Cystic Fibrosis Newborn Screening Cystic fibrosis (CF) newborn screening (NBS) has been performed in the United States for over 30 years [1], and in some European regions such as Veneto, Italy for almost 50 years [2]. The protocols have changed over time, especially during the past decade with nationwide programs underway [3,4]. The original protocols used a first tier of immunoreactive trypsinogen (IRT) analysis followed by a second IRT [5], and later a Next Generation Sequencing in Newborn Screening Next-generation sequencing (NGS) technologies [8,9] are now available and will help to increase NBS sensitivity, i.e., the percentage of CF cases identified. However, NGS also produces more IFs. Thus, the application of NGS may lead to more psychosocial complications. NBS programs are looking for ways to mitigate harm as they increase the benefits through NGS. Thus, the pivotal introduction of NGS with its unprecedented technology has reinvigorated the longstanding debate about whether NBS programs should notify parents about IFs, given that the risk/benefit ratio is uncertain [14,18]. CF carrier status and CRMS/CFSPID are unlike most NBS results as they do not require immediate medical attention, although these conditions are often disclosed with counseling in the neonatal period. However, some programs do not ensure that IFs are disclosed. In fact, at least one country (Norway) by law does not reveal CFTR carrier status discovered through NBS [4]. 
In the USA, many IF results are returned to the primary care provider, who may lack sufficient time, knowledge, or counseling skill [19,20], and may not even know the family because of inaccurate or insufficient labeling of dried blood spot specimens [21,22]. Therefore, parents can become anxious or confused about the implications of the results, as has been noted after NBS and other community screening programs [23,24]. Infants with CRMS/CFSPID may also have had biomedical complications of tests or treatments, which might have been unnecessary [11,12]. Since NGS and the increased number of IFs may cause a change in the balance of risks and benefits of NBS, it is important to re-examine policies and responsibilities for reporting results. Policy Options for Disclosure After reviewing the limited literature on this topic, we decided that it would be important to obtain fresh perspectives from new parents about potential policies. We were aware of three potential policy options for communicating IFs, namely Fully Informed, Withhold Information, and Parents Decide (see descriptions in Table 1). We sought to develop a survey instrument to gather parents' policy advice about two research questions: (1) how should NBS programs communicate with parents about single-variant NBS results that are consistent with being a carrier?; (2) how should NBS programs communicate with parents about one or two mutations consistent with a mild version of the screened disease, which has minor health significance compared to the full disease (e.g., CRMS/CFSPID)? Our hypotheses were based on three decades of NBS follow-up experience and especially our recent studies [21,[23][24][25][26], suggesting parents would wish to know about IFs even if the information was complex and potentially stressful and even if the condition was mild. Design The study used an anonymous online survey that contained three animated video clips, each of which explained some background information necessary for understanding the questions. The survey was hosted by Qualtrics (Provo, Utah and Seattle, Washington, USA). Participants could complete the survey using a computer, smartphone, or iPad with Internet access. IRB approval was obtained from Aurora Health Care in Milwaukee, Wisconsin, the University of Wisconsin School of Medicine and Public Health, and Meriter Hospital in Madison, Wisconsin. Parents were recruited to take the survey predominantly in Madison, after an initial effort had limited success in Milwaukee. Consent was obtained online from each participant before they began the survey. Methodologic Elements to Support the Objectives To increase the utility of the study for policy making, we included several innovations in the design. These resulted from sequential quality improvement efforts to create a user-friendly, unbiased survey of parental opinions during the first six postpartum months. Embedded Explanatory Videos Preference and opinion surveys often present several sentences of background information to read before asking questions. During our survey instrument's development, we became concerned about the amount of text that would be needed before asking key questions. We therefore created three animated video clips embedded between sections of the survey (Figures 1-3). The videos featured an animated character, Nurse Maria, who explained the basics of NBS and presented different scenarios for disclosure of NBS results.
The videos were scripted in stages to support a careful order of survey questions, as described below. Each video lasted about 5 min. The language was assessed and determined to be appropriate for those with an eighth-grade education. The video's script and graphics were drafted and vetted with a variety of parents and NBS educators so that they would be accessible to participants regardless of prior education and medical experience. The animations were revised and pilot tested before routine use in this study. All videos were uploaded to YouTube for embedding within the Qualtrics webtool. To our knowledge, this is the first time an Internet-based educational video has been used in an NBS-related survey. Substitution of a Generic Disease X instead of CF Our experience with previous surveys suggested that community respondents would have varied knowledge about CF, and we grew concerned that this heterogeneity might have an unpredictable influence on summarizing analyses. We therefore substituted for CF a fictitious "Disease X" with symptoms and implications that are very similar to CF. We also felt that the Disease X substitution would be useful for generalizing the study to other genetic conditions included on NBS panels. We developed explanations for autosomal recessive carrier status for Disease X and also created an analog for CRMS/CFSPID called "Mild X".
Vignettes and Complementary Modes for Preference Questions We considered a variety of approaches to the vignettes and questions and settled on a method from experimental psychology called an imagination exercise, in which respondents would be presented with a vignette and asked to imagine themselves in the position of a character in the story. The first vignette asked the respondent to imagine that at the same time her/his baby was born, a best friend named Tonya had a baby (Natalie) who was diagnosed with Mild X. Tonya conveys to the respondent all the information about Disease X and Mild X, and then the Nurse Maria character explains about the three policies in Table 1. After the video, the parents were asked to rank the three policies in relation to this scenario. Policies had to be ranked in different positions (first, second, third), but parents were given the option to leave policies unranked. Next, the respondents were asked "Do you think MOST parents would share your opinion about the policies?" and given two options; "Yes, I think more than half of all parents would share my opinion" and "No, I think that one of the other two policies would be better for most parents (you will be asked which policy in the next question)". Respondents who selected the latter choice were given the policies again and asked "Which policy do you think would be best for most parents of infants with a Mild X result?" The second vignette reprised the Tonya and Natalie story, but with Natalie diagnosed with genetic carrier status for Disease X, and Nurse Maria explaining carrier status using an animated Punnett square. After the video, respondents were given the same ranking task for placing themselves in Tonya's position and whether more than half of all parents would share their opinion, and if not then another ranking task for "most parents". After respondents were asked about their own preferences and their opinions about "most parents", we used three slider questions to compare how important different values were to each other such as autonomy compared with deferring to a clinician expert. Sample and Recruitment Eligible participants were parents of infants up to six months of age regardless of medical history. Fluency in English was required. The study began with a plan to recruit two samples of parents in the state of Wisconsin, beginning with one phase in Milwaukee and then proceeding to another in Madison. The Milwaukee recruiting strategy used fliers at a maternity hospital and clinics that served a poor urban population that is mostly African American. However, due to limited resources for recruiting, the Milwaukee phase served primarily as a pilot testing effort while resulting in six respondents. The Madison phase used recruiting fliers distributed in person at a popular store selling products for infants and at parent education classes located at a hospital with a large and diverse obstetrical population. Participants were told that the survey would take approximately 20 min to complete. As a gratuity for participation, respondents were offered a $10 retail gift card. The contact information for sending the gift card was obtained in a separate survey that was not linked to the subjects' responses on the survey questions. Data Management and Statistical Analyses During the final analyses, descriptive statistics were derived for both parent and child characteristics and frequency information from items evaluating experience with NBS. 
The proportion of parents ranking each policy first, second, and third was obtained separately for the Mild X and Disease X carrier status scenarios and was reported with Wilson 95% confidence intervals. Descriptive statistics were reported for the continuous value comparison variables, and one-way ANOVA was used to evaluate differences in value scores between parents who ranked the Fully Informed, Withhold, and Parents Decide policies first. Statistical significance was determined using two-tailed tests with α = 0.05. All data were analyzed using JMP software (SAS Institute, Cary, NC, USA). Sample Characteristics A total of 213 surveys were started, including 11 in the Milwaukee phase and 202 in the Madison phase. Of those, 35% (4 and 81, respectively) were excluded because the participant stopped early, or generated a response that was too incomplete for analysis, or completed the survey in under 1000 s, suggesting that the subject did not watch the entire duration of the video clips. Although these responses contributed some information, we decided as a stringent quality control requirement to accept only complete responses. The final sample included 128 respondents (60.1% of surveys begun). The median duration for the included surveys was 1406 s (IQR = 1007 s), not counting four outliers who left the survey open for more than 30,000 s. The mother was the respondent in 81.3% of surveys. The median respondent age was 33, while the median infant age was 2 months. Further descriptive data are shown in Table 2. In general, this was a well-educated sample of white married women. However, their knowledge about NBS was limited; 20% of respondents knew nothing or very little about NBS, despite their infant having been screened only a few months before, and 66% wished that they had known more. Thus, information on NBS policy options was new to this group, which we considered an advantage in this survey. Reaction to Animated Video Survey Format Reactions to the Nurse Maria videos were favorable among those who finished the survey, with 92% of respondents agreeing or strongly agreeing that they liked the videos, and 98% agreeing or strongly agreeing that the "videos explained things in a way that was easy to understand". Similarly, 98% agreed or strongly agreed that "the videos were better than reading several long paragraphs". In view of the well-educated nature of the sample, these responses are a significant finding of this study. Disclosure Preferences Parents' rankings of NBS disclosure policies were analyzed separately for both the Mild X and Carrier X scenarios, and for each of two questions: "If you had been in (the vignette), which of the three policies would you have preferred for yourself and your baby?" and "What do you think would be best for most parents of infants with (condition in the vignette)?" The proportions of respondent rankings for these four analyses are shown in Figure 4 where the top-ranked policies are compared (error bars are Wilson confidence intervals). As seen in Figure 4, the Withholding Information policy was obviously less popular than the other two policies. It was more challenging to compare the Fully Informed and Parents Decide policies, but there appeared to be a marginal trend favoring Parents Decide. Several other analyses shed additional light on this situation. As shown in Figure 4, respondents began by describing what they would have wanted in the Mild X vignette for themselves and their infants.
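For readers who want to see how interval estimates of this kind are obtained, the short Python sketch below computes a Wilson 95% confidence interval for a ranking proportion. It is only an illustration: the study's analyses were performed in JMP, and the count used here (73 of 128 first-place ranks, roughly the 57% first-choice share reported for the Parents Decide policy in the Mild X scenario) is an assumed example rather than an extract of the raw data.

```python
# Minimal sketch (not the study's JMP analysis): Wilson 95% confidence interval
# for the proportion of parents giving a policy a first-place rank.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% for z = 1.96)."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half_width = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half_width, center + half_width

# Hypothetical count: 73 of 128 parents rank a policy first (about 57%).
low, high = wilson_ci(73, 128)
print(f"proportion = {73 / 128:.3f}, 95% CI = ({low:.3f}, {high:.3f})")
```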
The next three vignettes allowed us to investigate how respondents changed their preferences in different situations. Between 10-25% of parents changed a preference when asked about the Carrier X vignette, or when opining about what would be best for other parents. Five parents (4.2%) who began with a Fully Informed or Parents Decide preference for Mild X answered that Withholding Information would be better for other parents. Nine parents (7.6%) who began with Fully Informed or Parents Decide for Mild X answered that Withholding Information would be better for themselves in the Carrier X vignette. Figure 4. Rankings of disclosure policies for Mild X and Disease X carrier status. Results shown are from respondents asked to rank their preferences for disclosure of policy options that could be implemented by caregivers for either a Mild X condition analogous to CRMS/CFSPID or Disease X like CF. The intent of this exercise was to learn what parents preferred and what they were opposed to as well: clearly, the Withholding Information policy option.
Respondents' preferences for the individual policies were compared with the data in Table 2 and other variables obtained through the survey. Respondents who had reported being the primary caregiver for the baby were more likely to vote for the Parents Decide policy (p = 0.006, Wilcoxon) or the full disclosure policy (p = 0.035, Wilcoxon). Respondents with newer infants were less likely to vote in favor of the Parents Decide policy (r = −0.19, p < 0.035). A vote in favor of the Withholding Information policy was less likely for parents who recalled being told about the NBS result. Value Comparison Table 3 and Figure 5 depict the three value comparison questions, with the latter showing the median (interquartile range) responses for the sample indicated. When comparing the importance of being Fully Informed to reducing emotional distress (Comparison A), parents gave preference to autonomy at the risk of becoming unnecessarily alarmed (Figure 5A). Likewise, when weighing the importance of autonomy in decision-making versus reducing emotional distress (Comparison B), parents preferred the statement consistent with autonomy (Figure 5B). However, when choosing between autonomous decision-making without all pertinent details or allowing someone who is knowledgeable to make decisions (Comparison C), parents showed a slight preference for deferring to someone who knows all necessary information (Figure 5C). We also explored value scores based on which policy parents ranked first for the Mild X and Disease X carrier status scenarios. All ANOVA results showed significant differences in mean value scores between first-rank policy groups, except for the Comparison C value scores between first-rank policy groups for Disease X carrier status. For Comparison A (comparing the importance of being fully informed to reducing emotional distress), parents who ranked the Fully Informed policy first most strongly favored being fully informed, followed by those who ranked the Parents Decide policy first, and finally by those who ranked the withhold option first. This pattern was present for first-rank policy groups from both the Mild X and Disease X carrier scenarios. For Mild X, all Hochberg's GT2 post-hoc tests were significant except for the Withholding and Parents Decide groupings. Similarly, in Comparison B (comparing the importance of autonomy in decision-making versus reducing emotional distress), parents who gave the Fully Informed policy a first-place ranking most strongly favored autonomy, followed by those who ranked the Parents Decide policy first, and finally by those who ranked the withhold policy first. For Disease X carrier status, all Hochberg's GT2 post-hoc tests were significant except for the Fully Informed and Parents Decide groupings. Compared to those who favored Fully Informed and Parents Decide, parents who ranked Withholding Information first in either scenario had average value scores closest to the withholding statement in Comparisons A and B. Even so, the Withholding Information group averages did not reflect a strong affinity for the withholding statement and tended to indicate a neutral attitude or even a slight preference for the opposing statement.
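To make the group comparison described above concrete, the following minimal Python sketch runs a one-way ANOVA on value scores grouped by first-ranked policy. It is a toy illustration only: the value scores and group sizes are synthetic numbers invented to show the mechanics, and the published analysis was carried out in JMP with Hochberg's GT2 post-hoc tests, which are not reproduced here.

```python
# Illustrative one-way ANOVA across first-ranked-policy groups (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical Comparison A scores (higher = stronger preference for being
# fully informed) for parents grouped by the policy they ranked first.
fully_informed = rng.normal(loc=8.0, scale=1.5, size=60)
parents_decide = rng.normal(loc=6.8, scale=1.8, size=58)
withhold = rng.normal(loc=5.5, scale=2.0, size=10)

f_stat, p_value = stats.f_oneway(fully_informed, parents_decide, withhold)
print(f"one-way ANOVA across first-rank policy groups: F = {f_stat:.2f}, p = {p_value:.4f}")
```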
Regarding Comparison C (choosing between autonomous decision-making without all pertinent details or allowing someone who is knowledgeable to make decisions), the average value scores for all groups were near the midpoint, with the exception of the Parents Decide groups, which slightly favored deferring decision-making to someone else. There were no significant differences in Comparison C value scores for the Mild X first-rank policy groups. Discussion There are three main options for policy regarding informing parents about IFs after newborn screening (Table 1), each with its advantages/benefits, disadvantages, and potential value for society. It is ideal for screening policy decisions to incorporate parental perceptions, but the literature provides a mixture of views, along with varying designs and sample populations [15,[25][26][27][28][29][30][31][32][33][34]. This study examined preferences for disclosure of NBS results among generally well-educated parents who recently experienced the NBS process, thus seeking their policy preferences in an ideal timeframe when NBS might be fresh in their minds. In this sample, the policy of withholding IFs (for the purpose of reducing unnecessary distress) was unpopular for both scenarios, although this strategy is often advocated among clinicians. Applying this policy in NBS, as in Norway [4], can be challenged, particularly when there are benefits to knowing one's genetic status [23]. In reaction to videos describing a Mild X condition analogous to CRMS/CFSPID, many parents favored policies that kept them fully informed or allowed them to determine whether to receive IFs. This confirms the wisdom of clinical practice recommendations that encourage full disclosure about this condition and the importance of longitudinal follow-up evaluations [11,12,35]. Although incidental findings related to CF were the focus of this study, the generic nature of the video contents might allow these preferences to be informative for disclosure of NBS results beyond CF. If further supported by future study and commentary, the onus would be on NBS programs and their funding providers to mitigate harm following disclosure. Distinguishing parents' preferences between the Fully Informed and Parents Decide policies is challenging. In the case of Mild X, more parents ranked the Parents Decide policy option first than ranked the Fully Informed policy first, but for Disease X carrier status, the Fully Informed option was slightly more popular than the Parents Decide policy as a top choice. This may mean that parents believe universal disclosure is less critical for Mild X (CRMS/CFSPID) than for Disease X/CF carrier status, but further study is warranted before such a conclusion could be made. However, the notion that parents may have different preferences for different categories of incidental findings raises the possibility of hybrid policies where certain results are always disclosed and others are optional. The value comparison results were largely consistent with policy preferences; parents favored autonomy and being fully informed at the risk of experiencing emotional distress. While we did not probe participants about why they were willing to endure distress, others have reported a sense of obligation or duty among parents in similar situations [31]. Interestingly, even the small number of parents who ranked the Withholding Information policy first did not strongly endorse the withhold statements in the value comparison questions.
This suggests that perhaps those who favor the withhold policy have high regard for being fully informed and maintaining autonomy but are influenced by other factors to choose the Withholding Information option. Given concerns about the capacity of NBS programs and practitioners to prepare parents to make informed decisions about IFs, we gave special attention to time and resource limitations in value comparison C. Statements in this comparison were written to reflect the possibility that there may not be time to teach parents all relevant information before a health decision needs to be made. On one end, parents could choose to maintain autonomy without all pertinent details, and on the other end, they could defer the health decision to someone who knows all the details. After favoring autonomy in Comparison B, this sample was more inclined to defer decision-making in Comparison C, indicating that knowledge, rather than personal control alone, was important to them. This is a positive indication that parents will understand the difficulties inherent in teaching/learning about IFs as the era of NGS evolves. Although one might argue that parents should not be the sole determinants of the child's interest in learning about IFs, practical considerations have led to the parent-child dyad being responsible for this information transfer. In fact, counseling resource limitations make it difficult to engage professional experts in this aspect of NBS follow-up communications. Our study was successful in employing a novel video survey design to deliver complex genetic and clinical information to the public. Thus, it adds to the previous NBS-related research on parental preferences by providing survey methodology that is more user-friendly than reading "several long paragraphs". Nearly all parents found the contents understandable and more engaging than a conventional written survey format. Although technical expertise is required for video design and creation, this model should be considered for future studies with non-medical populations and perhaps for parent education in association with NBS rather than the traditional brochures. In connection with this, the first video that can be accessed through Figure 1 provides a succinct, 2-min explanation of all aspects of the NBS process. A limitation of this study is the use of a convenience sample made up mostly of American mothers from a single community that selects for those willing and able to attend a voluntary class in the middle of the day. The sample was disproportionately white, well-educated, and married, all of which limit the generalizability of our results. The homogeneity of the sample was identified during preliminary analyses, after which the research team explored adding more recruitment sites that traditionally serve low-income and minority populations, such as public health departments administering the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). Unfortunately, we were unable to secure new collaborations. Past research on adults regarding their desire to learn about IFs discovered in genetic testing, including carrier status, has found little association between sociodemographic and literacy factors and preference for disclosure [32][33][34]. Therefore, our results may be similar in a more diverse population, although this remains a topic for further study.
Despite these limitations in generalizability, our study extends previous observations about parental preferences [15,[26][27][28][29][30] by its comparison of reactions to information disclosure about a potentially severe disease such as CF (Disease X) with a mild condition such as CRMS/CFSPID, and by incorporating three policy options into the survey, in addition to contributing a user-friendly video survey option to the range of methodologies available. Although there was less interest in the Parents Decide disclosure option with Disease X, the respondents clearly were opposed to withholding information on carrier status, even if the condition is mild. Policymakers need to keep this in mind as NGS-based screening expands, requiring both ethical [18,23] and practical [17] issues to be addressed. Thus, another implication of our study is that valuable parental input can be obtained about policy options with user-friendly, efficient methods prior to widespread implementation of NGS. Although some may argue that parental input should not be considered in formulating disclosure policies about IFs from screening tests, people participating in healthcare systems have a right to be engaged in the sharing of health-related, relevant knowledge, and NBS is a hybrid of public health and healthcare. The strong preference for autonomy that was identified in this survey underscores the importance of that ethical principle. Author Contributions: M.H.F. conceptualized and designed the survey questions and video scripts, contributed to data collection and analysis, drafted sections of the manuscript, and then reviewed, extensively revised, and approved the manuscript. K.E.M. visited and communicated with recruitment sites, performed data collection, cleaning and analyses, drafted the initial manuscript, reviewed, revised, and approved the manuscript. A.L. participated in the design of the study, supervised data collection as well as its analysis and interpretation, and reviewed, revised, and approved the manuscript after drafting sections on methodology. P.M.F. secured partnerships with recruitment sites after participating in the final design, drafted sections of the initial manuscript, and then reviewed and revised the manuscript. All authors have read and agreed to the published version of the manuscript. Informed Consent Statement: Consent was obtained from each participant before they began the survey.
2022-09-30T15:01:35.922Z
2022-09-27T00:00:00.000
{ "year": 2022, "sha1": "f3122074260287695592b1c738bedae778824473", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2409-515X/8/4/54/pdf?version=1664261511", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6db82a50100a2fd2403195e1f0191dffee9a89fb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
246436347
pes2o/s2orc
v3-fos-license
Study on Dynamic Constitutive Model of Polypropylene Concrete under Real-Time High-Temperature Conditions : Polypropylene (PP) concrete, a kind of high-performance fiber-reinforced concrete, is widely used in large concrete structures. Studies on the dynamic mechanical properties of polypropylene concrete under temperature–impact load can provide a theoretical basis for research on the structural stability of concrete structures during fires, explosions, and other disasters. The purpose of this paper was to study the dynamic mechanical properties of polypropylene concrete under real-time high-temperature conditions and to establish a dynamic damage constitutive model for polypropylene concrete under real-time high-temperature conditions. In this paper, Split Hopkinson Pressure Bar (SHPB) equipment was used to test the dynamic mechanical properties of polypropylene concrete with different high strain rates under different real-time high temperatures (room temperature, 100 °C, 200 °C, 300 °C, 400 °C, 500 °C, 600 °C, 700 °C, and 800 °C). A modified “Z-W-T” model was used to determine the recursion of the dynamic damage constitutive model of polypropylene concrete under different temperature–impact loads, and the model was compared with the experimental data. The results show that the thermal conditions influenced the chemical composi-tion and microstructure of the polypropylene fiber concrete, which was why the high temperatures had a strong influence on the dynamic mechanical properties of polypropylene concrete. When the heating temperature exceeded 300 °C, although the polypropylene concrete specimen was still able to maintain a certain strength, the dynamic mechanical properties showed a deterioration trend as the temperature increased. The comparation between the experimental data and the fitting curve of the dynamic damage constitutive model showed that the dynamic stress–strain curves could be well matched with the fitting curves of the dynamic damage constitutive model, meaning that this model could describe the dynamic mechanical properties of polypropylene concrete under different real-time high temperatures well. Introduction In recent years, a new tendency to add different kinds of materials [1,2], such as carbon nano-fibers [3,4], steel fibers [5], and polypropylene fibers [6], to concrete to improve its mechanical properties has been observed. Polypropylene (PP) fiber concrete [6], a type of high-performance fiber-reinforced concrete, is characterized by its high level of toughness and high tensile strength. In addition, polypropylene fiber-reinforced concrete has less of a risk of cracking when exposed to high temperatures, a benefit from the internal steam pressure caused by the fusion temperature of the polypropylene fiber (170 °C) [7]. The above characteristics make polypropylene fiber concrete appropriate for widespread use in large concrete structures [8], such as in nuclear power protection facilities, military protection facilities, and airport runways. Explosions often occur when concrete structures experience fire-related disasters [9,10], which may engender instability and even the collapse of concrete structures due to the high temperatures caused by the flames and the dynamic load caused by the explosions [11]. 
Studying the dynamic mechanical properties and dynamic damage evolution relationship of concrete materials under high temperatures is of great significance for improving the fire resistance and explosion resistance of concrete structures, allowing the structure's security requirements as well as national defense requirements to be satisfied. There have been increasing concerns in recent years surrounding how the mechanical properties of polypropylene fiber concrete react to thermal effects. Some related research results have shown that the mechanical properties of concrete specimens are obviously improved when they contain fibers [12][13][14]. Moreover, the dynamic mechanical properties of polypropylene fiber concrete, such as dynamic compressive strength [15], dynamic tensile strength [16,17], and the dynamic elastic modulus [7], show an obvious thermal effect when affected by high temperatures. In the meantime, some research results have shown that since polypropylene fiber concrete is a brittle material, it is highly sensitive to strain rate [18], and the coupling effect of the thermal and dynamic loads is not a simple summation relationship [19]. Consequently, the thermal effect and the strain rate effect cannot be ignored when studying dynamic damage evolution in polypropylene fiber concrete under high temperatures, because polypropylene fiber concrete prepared under different engineering backgrounds has different mechanical properties [20] owing to its structural characteristics. Since Zhu's [21,22] proposal of an improved Z-W-T non-linear viscoelastic model on the basis of the dynamic mechanical properties of polymers in 1981, the model has been extended to the expression of the mechanical properties of other materials [23][24][25]. Through further research, other scholars found that the Z-W-T non-linear viscoelastic model can not only describe the dynamic response of soft materials such as polymers well, but can also describe the dynamic mechanical properties of concrete materials [26]. Subsequently, some scholars began to explore dynamic damage constitutive models of fiber-containing concrete materials and of concrete materials affected by thermal conditions based on the Z-W-T non-linear viscoelastic model. Fu [18] studied the dynamic compression behavior of basalt-polypropylene hybrid fiber concrete (HBPRC) with different matrix strengths using the split Hopkinson pressure bar and proposed a dynamic damage constitutive model based on the principle of the damage variable; the results showed that the proposed model was able to describe the stress-strain curves obtained from the experiments well. Zhai [27] studied the influence of high-temperature cold damage on the mechanical properties of concrete and proposed a dynamic nonlinear elastic constitutive equation that considers the cooling effect; the fitting results of that equation were very close to the experimental results, so it is considered able to describe the dynamic mechanical properties of concrete under the corresponding conditions. To sum up, although the dynamic mechanical properties and dynamic damage evolution models of polypropylene fiber concrete affected by thermal conditions are receiving increased attention, most concrete specimens in the present research are heated and then naturally cooled to room temperature during specimen treatment.
Moreover, research on the dynamic damage evolution relationship of polypropylene fibers affected by high temperatures also tends to consider the temperature damage and dynamic load damage as a whole, and research on the dynamic damage evolution relationship of polypropylene fiber concrete that considers the coupling effect of temperature and dynamic impact load has rarely been reported. Accordingly, in this paper, a dynamic damage constitutive equation for polypropylene fiber concrete that considers the effects of temperature and strain rate was constructed. Additionally, modified SHPB dynamic mechanical test equipment was used to test the dynamic impact compression of PP concrete with different impact air pressure grades (0.4 MPa, 0.6 MPa, and 0.8 MPa) under different real-time temperature grades (room temperature, 100 °C, 200 °C, 300 °C, 400 °C, 500 °C, 600 °C, 700 °C, and 800 °C) to explore the influence of thermal and impact loading on the dynamic mechanical properties of polypropylene (PP) concrete under real-time high-temperature conditions and the dynamic damage evolution relationship of polypropylene fiber concrete in the corresponding environments. In addition, the dynamic damage constitutive equation determined for polypropylene fiber concrete was compared to the experimental results to verify its validity. The Modified Dynamic Constitutive Model of Polypropylene Concrete The main expression of the Z-W-T non-linear viscoelastic model (Equation (1)), which is shown in Figure 1a, consists of two parts [28]: one is the equilibrium response element, which is unrelated to the strain rate (E0 in Figure 1a, part I in Equation (1)), and the other is the transient response, which is related to the strain rate and is composed of two Maxwell elements (E1 and E2 in Figure 1a, part II in Equation (1)). The Modified Z-W-T Nonlinear Viscoelastic Model The Z-W-T nonlinear viscoelastic model can be improved by considering how the dynamic mechanical properties of polypropylene concrete are affected by temperature and impact load under real-time high temperatures: 1. The initial stage of the stress-strain curve of concrete under impact loading is nearly linear elastic [29], meaning that part I of Equation (1) can be approximately converted into a linear polynomial, as shown in Equation (2). 2. Part II of Equation (1) consists of two Maxwell element relaxation functions with large differences in relaxation time (θ1 and θ2): the Maxwell element with relaxation time θ1 describes the mechanical behavior of the material at a low strain rate, and the Maxwell element with relaxation time θ2 describes the viscoelastic behavior of the material at a high strain rate (the orders of magnitude of θ1 and θ2 are 10–10² s and 10⁻⁴–10⁻⁶ s, respectively). Other studies have shown that the mechanical properties of concrete materials are obviously affected by the strain rate and are especially sensitive at high strain rates [26]. The strain rate of polypropylene concrete under impact load is on the order of 10² s⁻¹ [30], which results in a short observation time; in this case, the low-frequency Maxwell element cannot relax and behaves like a linear spring, whereas the Maxwell element with relaxation time θ2 describes the viscoelastic mechanical behavior of the material under high strain rate conditions.
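A quick numerical check makes this simplification argument concrete. The sketch below (with the elastic part already taken as linear, as in modification 1, and purely hypothetical parameter values; only the orders of magnitude of θ1 and θ2 follow the text) compares the full two-Maxwell-element response with a spring-plus-single-Maxwell form at a constant strain rate of about 10² s⁻¹, showing that the low-frequency element behaves like a linear spring over the short loading time.

```python
import numpy as np

# Minimal sketch: at a high, constant strain rate the loading time is far shorter than
# the long relaxation time theta1, so the low-frequency Maxwell element acts like a
# simple spring and the two-element Z-W-T model collapses to a spring in parallel with
# one Maxwell element. All parameter values are hypothetical, not fitted values.

def maxwell_stress(strain, strain_rate, E, theta):
    """Stress in a single Maxwell element under constant-strain-rate loading."""
    t = strain / strain_rate
    return E * theta * strain_rate * (1.0 - np.exp(-t / theta))

E0, E1, E2 = 25e9, 5e9, 10e9          # Pa (hypothetical stiffnesses)
theta1, theta2 = 50.0, 1e-5           # s  (orders of magnitude quoted in the text)
eps = np.linspace(1e-6, 0.01, 500)    # strain range of an SHPB pulse
rate = 100.0                          # s^-1, typical SHPB strain rate

full = E0 * eps + maxwell_stress(eps, rate, E1, theta1) + maxwell_stress(eps, rate, E2, theta2)
simplified = (E0 + E1) * eps + maxwell_stress(eps, rate, E2, theta2)

# The relative difference stays tiny because t = eps/rate << theta1 over the whole pulse.
print(np.max(np.abs(full - simplified) / full))
```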
Therefore, a simple spring can be used to replace the low-frequency Maxwell element in the Z-W-T non-linear viscoelastic model in this situation (Equation (3)), and under a high strain rate, the Z-W-T nonlinear viscoelastic model can be expressed as shown in Figure 1b. Equation (4) shows the equivalent treatment of the two parallel elastic elements, and the adjusted Z-W-T nonlinear viscoelastic model (Equation (5)) is represented in Figure 1c. Although the final expression of the modified Z-W-T nonlinear viscoelastic model is similar to that of the Kelvin-Voigt model [31], the derivation processes of the two models are not the same. 3. As a heterogeneous material, polypropylene concrete contains a large number of randomly distributed polypropylene fibers and pores [32]. Therefore, damage factors should be considered when studying dynamic damage constitutive models of polypropylene concrete under impact load and thermal conditions. From the perspective of continuum damage mechanics, polypropylene concrete is assumed to be a continuous medium [33]. The composite damage variable D was introduced to measure the degree of damage experienced by the polypropylene concrete. In this case, Equation (6) describes the relationship σa = σr/(1 − D) according to the principle of strain equivalence [34], where σa represents the effective stress, σr represents the nominal stress, and D represents the damage variable. The modified Z-W-T non-linear viscoelastic model with the damage variable D (Equation (7)) can then be obtained by substituting Equation (5) into Equation (6). Damage Variable The thermal damage to polypropylene concrete can be described using the damage accumulation method, according to both the macroscopic mechanical properties and the internal structure variation characteristics of the polypropylene fiber concrete sample affected by thermal conditions, and the method used in heat transfer theory [10]. There is currently no uniform method for describing damage accumulation, so the descriptors from [35][36][37][38] were used to evaluate the thermal damage. In this paper, the commonly used elastic modulus ratio method was used to describe the thermal damage imparted on the structural characteristics of polypropylene concrete, with the relevant parameters being those measured in the real-time high-temperature experiments. The thermal damage DT was calculated according to Equation (8), DT = 1 − ET/E0, where DT represents the thermal damage factor, and ET and E0 represent the dynamic elastic modulus of polypropylene concrete under high-temperature conditions and at room temperature, respectively. It is well documented [39][40][41] that failure of materials under impact load is a time-dependent process in which microdefects or microdamage evolve at a certain rate, meaning that the size of the broken fragments after damage decreases as the loading strain rate increases [42], which is caused by the increase in the number of microcracks that develop during failure. The experimental dynamic impact compression results for polypropylene fiber concrete showed that the damage and failure of polypropylene fiber concrete under impact load also conform to this rule and show an obvious strain rate effect, indicating that the damage and failure of polypropylene fiber concrete under impact load have a direct relationship with the strain rate.
Because of this, the damage caused by the dynamic impact load on polypropylene fiber concrete can be regarded as a function of the strain rate, regardless of whether the sample was under constant strain rate loading. Therefore, based on references [43][44][45], it was assumed that the damage degree of the internal differential elements obeys the Weibull distribution, which was used to express the damage experienced by the polypropylene concrete under an impact load, as shown in Equation (9): DM = Nt/N, where DM represents the damage variable, and Nt and N represent the number of damaged differential elements and the total number of differential elements in the concrete under a certain state, respectively. It is necessary to assume that the probability density of the Weibull distribution follows the relationship in Equation (10), P(F) = (m/F0)(F/F0)^(m−1) exp[−(F/F0)^m], where F represents the strength of the micro-element, which can be calculated by referring to reference [46], and m and F0 represent the parameters related to the material properties that can be calculated by the peak limit method [47]. The random variation interval of a differential element (assumed to be [ε, ε + dε]) can be used to characterize the change in the damage variable with the change in the stress state, according to the assumed density function of the differential elements. The number of damaged differential elements in a certain state can then be expressed as Equation (11), and thus the damage equation of polypropylene concrete under impact load is given by Equation (12), DM = 1 − exp[−(F/F0)^m]. The results obtained in [40] show that when considering the combined effect of temperature damage and dynamic load damage on polypropylene concrete specimens, the two kinds of damage cannot be directly superimposed on each other; instead, the coupling effect of the temperature damage and the dynamic load damage should be considered. The coupling of the thermal damage and the dynamic load damage can be defined with reference to the stress equivalence principle [34] (Equation (13)). The final form of the damage factor, Equation (14), can then be obtained by combining the determined thermal damage (Equation (8)) with the dynamic load damage (Equation (12)): D = DT + DM − DT·DM. The dynamic constitutive model of polypropylene concrete that considers the thermal effect (Equation (15)) can be obtained by substituting the determined damage factor into the modified Z-W-T non-linear viscoelastic model, where F represents the strength of the micro-element, which can be calculated according to [46], m and F0 represent the parameters related to the material properties that can be calculated by the peak limit method [47], and the other parameters can be obtained through fitting with the experimental data. The experimental condition under dynamic stress equalization can be regarded as constant strain rate loading in the SHPB experiment; in this case, the integral term (the low-frequency Maxwell element) of Equation (15) can be simplified, and Equation (15) reduces to Equation (16). Experiment and Results The purpose of the experiment was to explore the dynamic mechanical properties of polypropylene fiber concrete under real-time high temperatures and to determine the relevant parameters for the dynamic damage constitutive model of polypropylene fiber concrete affected by thermal conditions.
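Since Equations (8)–(16) are not reproduced legibly here, the following sketch assembles the damage formulation in the hedged form suggested by the surrounding description: thermal damage from the modulus ratio, impact damage from an assumed Weibull law (taken in strain for simplicity), coupling through the stress-equivalence principle, and a simplified Z-W-T backbone degraded by the coupled damage. All numerical values are illustrative, not fitted parameters from this study.

```python
import numpy as np

# Minimal sketch of the damage-coupled constitutive model; functional forms are
# assumptions consistent with the description above, not the paper's exact equations.

def thermal_damage(E_T, E_0):
    """D_T = 1 - E_T/E_0, from the dynamic elastic moduli at temperature T and room temperature."""
    return 1.0 - E_T / E_0

def impact_damage(strain, m, F0):
    """D_M from an assumed Weibull distribution of micro-element strength (expressed here in strain)."""
    return 1.0 - np.exp(-(strain / F0) ** m)

def coupled_damage(D_T, D_M):
    """Coupled thermo-mechanical damage: the two contributions are not simply additive."""
    return D_T + D_M - D_T * D_M

def damaged_stress(strain, strain_rate, Ea, E2, theta2, D):
    """Simplified Z-W-T stress at constant strain rate, reduced by the damage variable D."""
    viscous = E2 * theta2 * strain_rate * (1.0 - np.exp(-strain / (strain_rate * theta2)))
    return (1.0 - D) * (Ea * strain + viscous)

# Illustrative (hypothetical) values only.
eps = np.linspace(0.0, 0.012, 300)
rate = 100.0                                             # s^-1
D = coupled_damage(thermal_damage(E_T=18e9, E_0=30e9),
                   impact_damage(eps, m=3.0, F0=0.008))
sigma = damaged_stress(eps, rate, Ea=30e9, E2=10e9, theta2=1e-5, D=D)
```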
The improved SHPB dynamic mechanical property test was carried out to test the dynamic impact compression of C30 PP concrete with different average impact velocity grades under different real-time temperature grades (200 °C, 400 °C, 600 °C and 800 °C). The independent experiments were repeated three times under each engineering condition, and meaningful data were selected for the statistics in order to ensure that the experimental data were scientific. In addition, experiments were also carried out at room temperature under each average impact velocity grade, serving as a control group. Experimental Material For these experiments, Chinese national standard P.O.42.5 Portland cement produced by Yunnan Hongshi cement Co., LTD was adopted. High-quality grade II fly ash with a density of 2098 kg/m 3 was produced by Yunnan Power Plant from Yunnan Province was used for the experiments. The following water-binder ratio was adopted for all mixes: 0.44. The characteristics of the polypropylene fibers used in this experimental are shown in Table 1. The proportions of the concrete samples are described in Table 2. The polypropylene fiber concrete specimens were made into φ75 mm × 40 mm cylinder specimens [48], as shown in Figure 2. Experimental Equipment The SHPB equipment conducted in the experiment is shown in Figure 3. The incident rod and the transmission rod were Φ2000 mm × 75 mm in size. The spindle bullet was used to adjust the waveform shape. The material parameters of the rod are given in Table 3. In the real-time high-temperature SHPB experiment, the SHPB equipment was modified, and the insulation device was installed in the same position as the loading specimen. The strain gauges were positioned 1.8 m and 0.6 m away from the contact of the specimen at the incident rod and transmission rod, respectively. In this experiment, the XH7L-12 muffle furnace produced by Zhengzhou Xinhan Instruments and Equipment Co., Ltd. was used to heat the specimens. The rated power was 5 kW, the voltage was 380/220 V, the maximum working temperature was 1200 °C, and the temperature control accuracy was 1 °C-3 °C. The furnace used special ceramic fiber materials and composite fiber materials, which were characterized by their fast heating rate, and these fibers were 300 mm × 200 mm × 120 mm (length * width * height) in size. The Solution of Real-Time High-Temperature Experiments During the heating process, the heating rate was set to 5 °C/min, which was chosen based on the heating method used for concrete specimens in the literature [43], and the constant temperature was maintained for 2 h when the temperature reached the set temperature grade. The heating curves are shown in Figure 4. It is well documented [49][50][51][52] that concrete is a material with poor thermal conductivity compared to other materials, such as metals, and that the temperature variation rate of concrete is much less than that of metal materials. The thermal conductivity and heat loss time were considered in order to ensure that the temperature of the concrete specimens will still be able to satisfy the requirements of the SHPB experiment. During the SHPB test, the asbestos-wrapped specimen shown in Figure 3 was loaded on the experimental platform to reduce heat loss from the specimen into the surrounding air, ensuring that the specimen was in a relatively sealed environment, meaning that the experimental loading processes was able to be maintained at a constant temperature. 
The time from the completion of heating to the completion of the SHPB experiment was measured. During the high-temperature SHPB experiment, the specimen was taken out of the furnace after heating and carried to the SHPB test bench, which took about 30 s. The impact experiment was then carried out. The whole process, from preparation to the end of the experiment, took about 20 s in total. Therefore, the time from when the specimen was taken out of the furnace to the end of the experiment was about 50 s. Because the specimen lost heat during this process, the heating temperature had to be adjusted; that is, the heating temperature was set to 101–105% of the corresponding temperature level determined in the experimental scheme. In order to verify this real-time high-temperature solution, the temperature of the concrete specimens was measured under each of the four temperature groups. As shown in Table 4, the temperature of the specimen at the moment of bullet impact was 32 °C lower than that of the specimen after loading. The temperature variation law of the concrete specimens at the various stages of the experiment was thereby obtained, and the temperature of the specimen heated by the electric furnace was regarded as the temperature for the whole impact experiment. Experiment Theory As shown in Figure 5, during the SHPB experiment, the stress wave, marked as the incident wave εI(t), was produced by the impact and transmitted along the axial direction of the bar [53]. Some stress waves are reflected at the S1 interface and propagate in the opposite direction to form the reflected wave εR(t), and the residual stress waves continue to propagate through the specimen to form the transmitted wave εT(t). These waves were collected by strain gauges A and B, which were attached to the incident rod and transmission rod, respectively. The average stress, strain rate, and strain of the specimen could be calculated by Equation (17) [54], where A, E, and C0 are the cross-sectional area, Young's modulus, and wave velocity of the bar material, and ls and As are the length and cross-sectional area of the specimen. Typical Waveform and Dynamic Stress Equalization The representative original voltage signal diagram for this experiment can be seen in Figure 6. It can be observed from Figure 6 that the loading waveforms show a semi-sinusoidal shape and that the curves are smooth, without any obvious waveform dispersion being observed. The incident wave also basically returns to the origin, and the reflected wave has a relatively flat section, which meets the constant strain rate loading conditions [53,55]. The method proposed by YIN [56] and shown in Equation (18) was applied to verify dynamic stress equalization for the SHPB experiment with the fusiform bullet. Whether stress equalization was achieved in the SHPB experiment could be assessed by comparing the curve trends of P1 and P2. In Equation (18), εI, εR, and εT represent the incident wave, reflected wave, and transmitted wave, respectively; P1 and P2 represent the stress at interfaces S1 and S2, respectively; and A0 and E0 are the cross-sectional area and elastic modulus of the bar. A typical stress equalization curve is shown in Figure 7, where it can be observed that the evolution trends of the P1 and P2 curves are basically consistent, proving that the constant strain rate loading condition was satisfied.
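As an illustration of the data reduction in Equations (17) and (18), the sketch below applies the conventional three-wave SHPB analysis. Since the exact expressions are not legible here, the standard forms are assumed, and the wave histories, bar constants and specimen dimensions are placeholders (only the 75 mm bar and specimen diameter and the 40 mm specimen length follow the text), not measured values.

```python
import numpy as np

def shpb_three_wave(eps_i, eps_r, eps_t, dt, A0, E0, C0, As, ls):
    """Average specimen stress, strain rate and strain from the three measured waves."""
    stress = A0 * E0 * (eps_i + eps_r + eps_t) / (2.0 * As)
    strain_rate = C0 * (eps_i - eps_r - eps_t) / ls
    strain = np.cumsum(strain_rate) * dt
    return stress, strain_rate, strain

def face_forces(eps_i, eps_r, eps_t, A0, E0):
    """Forces on the two specimen faces; P1 and P2 nearly coincide once dynamic
    stress equilibrium (and hence near-constant strain rate loading) is reached."""
    P1 = A0 * E0 * (eps_i + eps_r)
    P2 = A0 * E0 * eps_t
    return P1, P2

# Synthetic example (placeholder waveforms and bar constants, not measured data):
t = np.linspace(0.0, 200e-6, 400)
dt = t[1] - t[0]
eps_i = 1e-3 * np.sin(np.pi * t / t[-1])   # half-sine incident pulse
eps_r = -0.4 * eps_i                       # assumed reflected portion
eps_t = eps_i + eps_r                      # equilibrium implies eps_t ≈ eps_i + eps_r
A_bar = np.pi * 0.0375**2                  # 75 mm diameter bar and specimen
stress, rate, strain = shpb_three_wave(eps_i, eps_r, eps_t, dt,
                                       A0=A_bar, E0=210e9, C0=5100.0,
                                       As=A_bar, ls=0.04)
P1, P2 = face_forces(eps_i, eps_r, eps_t, A0=A_bar, E0=210e9)
```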
The Relationship between Impact Air Pressure Level and Impact Velocity The impact velocity was determined by the impact air pressure. In order to judge the average impact velocity at different impact air pressures, experiments were carried out under no-load conditions. Table 5 showed the impact pressure at different impact air pressure levels. Results of Experiment The dynamic stress-strain curves of polypropylene concrete under different impact velocities and different temperature grades are shown in Figure 8. For the convenience of description, the average impact velocity under the same impact pressure was determined to be the same in this paper. Due to the limitation of the length of this paper, only one set of experiments under each working condition was selected to draw the dynamic stressstrain curves. As the temperature level increases, the peak stress first increases and then decreases, and the slope of the linear elastic stage decreases continuously. Effect of Thermal Conditions on Dynamic Mechanical Properties of Polypropylene Fiber Concrete The average dynamic compressive strength and average elastic modulus of polypropylene concrete under different temperature levels and different impact pressure levels are showed in Figure 9. It can be observed from Figure 9a that under the action of temperature load, the dynamic compressive strength of polypropylene fiber concrete was not monotonically linear as the temperature grade increased. For example, with an impact air pressure 0.4 MPa (average impact velocity = 5.5 m/s), compared with the case at 25 °C, the dynamic compressive strength of polypropylene fiber concrete at 200 °C increases by 10.2%, and compared with the case at 200 °C, the dynamic compressive strength of polypropylene fiber concrete at 400 °C, 600 °C and 800 °C decreases by 8.73%, 31.14% and 72.59%, respectively. It seemed that there was a threshold temperature grade; when the temperature grade was lower than this threshold, the dynamic strength of polypropylene fiber concrete increased as the temperature grade increased, and when the temperature grade was higher than this threshold, the opposite situation takes place. In this experiment, the threshold was between 200 °C and 400 °C. It also can be observed from Figure 9b that the elastic modulus of polypropylene concrete under different temperature grades decreased as the thermal grade increased and that it reached its minimum at 800 °C. Although the dynamic elastic modulus showed an improvement when the impact pressure level increased, the strain rate effect of the elastic modulus is not as obvious compared to the strain rate effect of the dynamic compressive strength. Other relevant studies from the literature [56][57][58][59][60] have used microscopic observation technology to study the internal components, structure, and other microscopic aspects of fiber concrete exposed to high temperatures. The research results can explain the experimental phenomenon well: before the threshold temperature, as the temperature increases, part of the water that had been combined with the concrete decomposed due to the high temperatures, promoting the further hydration of the unhydrated cement particles [60]. The polypropylene fibers gradually dissolved, and the dissolved polypropylene fibers played a role in filling the original pores, improving the strength. 
When the temperature exceeded a certain range, the continuous escape of water from the concrete and the decomposition of the hydration products caused the internal structure to degrade, and the dynamic mechanical properties deteriorated accordingly. Validation of Constitutive Model and Determination of Parameters The calculated values of the m and F0 parameters in the dynamic damage constitutive model of polypropylene concrete under the different experimental conditions are shown in Table 5. The other model parameters could be determined by using data-fitting software combined with the experimental data after the calculated values were substituted into the model. Figure 10 shows the fitting results of the dynamic damage constitutive model of polypropylene concrete under different conditions, and the parameter values obtained by fitting are shown in Table 6. It can be observed from the fitting results in Figure 10 that the model can describe the dynamic stress-strain curves of polypropylene concrete, and the variation rules of the thermal effect and strain rate effect under different conditions are also consistent with the experimental data. However, the fitting quality was not the same under every condition. Compared with the higher impact pressure level, the fitting of the model was better under the impact pressure grades of 0.4 MPa and 0.6 MPa, which could be the result of the different failure processes that the specimens experienced at the higher strain rates. Conclusions In this paper, the dynamic mechanical properties of polypropylene fiber concrete were tested under different thermal grades and different impact rates using improved SHPB equipment. A dynamic damage constitutive model of polypropylene fiber concrete that considers thermal effects was constructed according to the dynamic mechanical properties of polypropylene concrete under real-time high-temperature conditions. The main conclusions are as follows: A dynamic damage constitutive model of polypropylene concrete considering thermal effects was established based on the Z-W-T nonlinear viscoelastic model. The model considers the damage to polypropylene concrete caused by thermal conditions and the coupling effect of the temperature and impact load. There was an obvious thermal effect in the dynamic mechanical properties of polypropylene concrete. The dynamic elastic modulus of the polypropylene concrete decreased as the temperature grade increased. The thermal effect on the dynamic compressive strength was different: there was a threshold temperature grade, below which the dynamic strength of polypropylene fiber concrete increased as the temperature grade increased, and above which the opposite took place. In this paper, the threshold temperature grade was between 200 °C and 400 °C. Compared with the experimental data, it was determined that the model can describe the stress-strain curves of polypropylene concrete under different conditions well. The temperature effect and strain rate effect shown by the fitting curves are consistent with the experimental data.
2022-02-01T16:16:21.900Z
2022-01-29T00:00:00.000
{ "year": 2022, "sha1": "831c67662f9970bf68964f783d61c2cc77c568fe", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/12/3/1482/pdf?version=1643537282", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "ab65e22b136aec04c73cc0436f2152f9016d58b6", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [] }
231662832
pes2o/s2orc
v3-fos-license
The Breast Tumor Microenvironment: Could Silicone Breast Implant Elicit Breast Carcinoma? Abstract Complications related to breast implants have received much attention recently. Breast implant-associated anaplastic large cell lymphoma, silicone-induced granuloma of breast implant capsule, and breast implant illness are the main complications reported in the medical literature. However, the literature contains limited evidence regarding the possibility of silicone implants eliciting breast carcinoma. In this manuscript, we propose a theory in which the immune response to silicone breast implant gel bleeding acts as a triggering point for tumor oncogenesis in breast tissue. This hypothesis is derived from our findings of a case of invasive and undifferentiated medullary carcinoma in a patient with a silicone breast implant. The following concepts have been used to support this theory: 1) silicone bleeding from intact breast implants; 2) metaplasia: an adaptation to injury and precursor to dysplasia and cancer; 3) T-cell dysfunction in cancer immunity; 4) inhibitory cells in the tumor microenvironment (TME); 5) morphogenesis and bauplan; and 6) concepts underlying medullary carcinoma. We propose that the inflammatory process in response to silicone particles in the pericapsular glandular tissue favors the development of cellular mutations in specialized epithelial cells. This reverse morphogenesis could have resulted in breast carcinoma of the medullary type in the present case. Introduction Complications related to breast implants have received much attention recently. About five years ago, breast implant-associated anaplastic large cell lymphoma (BIA-ALCL) gained attention in the mainstream media as a severe complication of silicone implants. However, at that time, less than 300 cases of BIA-ALCL had been reported in the literature, and the disease was also associated with a low mortality rate, which contributed to the narrative that implants were relatively safe. [1][2][3] The intracapsular liquid collection presence without inflammatory signs is the primary imaging finding described in the literature for the BIA-ALCL diagnosis. However, the intracapsular collection finding is not specific for BIA-ALCL and can be found in other pathologies such as inflammatory processes (such as autoimmune diseases) and infectious processes. When the collection development is more than one year after surgery, we can characterize it as a late seroma. In this context, warm seromas could be designated when clinical inflammatory complaints are associated, and cold old seromas when the main associated finding present is breast augmentation without inflammatory signs. 4,5 During the same period, we started a prospective study that aimed to investigate complications related to breast implants. This research was approved by the institutional ethics committee and registered at Plataforma Brasil. In addition to noting that many patients with breast implants showed common breast magnetic resonance imaging (BMRI) abnormalities, we observed that many patients had similar clinical complaints, such as joint pain, itching, skin rash, and hardening of the breasts. On the basis of these findings, we described a disease entity called silicone-induced granuloma of the breast implant capsule (SIGBIC). 4 For SIGBIC diagnosis, we adopted three diagnostic criteria: a mass with a high signal in T2 weighted BMRI sequence, late-contrast enhancement, and black-drop signal. 
We observed that these imaging findings appeared irrespective of the integrity of the implant surface. 5 We explained that this disease was triggered by an immune response to silicone bleeding from the prosthesis within the external environment. 6 We also described the disease's pathophysiology and related the most indolent cases to the presence of extracellular silicone (giant cells) and the most aggressive cases to intracellular silicone (foamy histiocytes). The latter presentation was always associated with the presence of intracapsular fluid collection (seroma). 7 Even without signs of macroscopic rupture and regardless of their surface texture (smooth or textured), all breast implants deteriorate over time. Silicone bleeding is the result of surface deterioration irrespective of macroscopic surface integrity 8 ( Table 1). The treatment of SIGBIC consists of en bloc capsulectomy and breast implant explantation. Given the history of immunological reactions, replacement with a new implant is usually not possible. 6 However, the breast tissue injuries caused by this inflammatory reaction are currently not sufficiently understood. About two years ago, a vast network of women with breast implants shared their personal experiences regarding breast implants on social media (Facebook, Instagram) and reported common clinical symptoms. They called the disease breast implant illness (BII). Although BII was initially treated as a myth by medical academia, 8,9,10 the Food and Drug Administration (FDA) has subsequently recognized BII and notified silicone manufacturers that they should alert patients to possible complications. The final guidance, "Breast Implants -Certain Labeling Recommendation to Improve Patient Communication," which is available on the FDA homepage, provides recommendations concerning the content and format for certain labeling information for saline and silicone gel-filled breast implants, including a "black box warning." According to the FDA, textured implants are more likely to have complications. 11 Nevertheless, information regarding the oncogenic potential of these implants remains scarce, and the literature contains only a few case reports outlining the association between breast implants and carcinoma. [12][13][14] Moreover, the existing evidence for the association between breast implants and epithelial carcinoma is contradictory. 14,15 To address these issues, the present manuscript based on our experience and research aims to hypothesize a possible oncogenesis pathway underlying tumor development in patients with breast implants. Index Case Report This illustrative case report describes the findings of a patient who underwent an aesthetic procedure for breast augmentation with texturized retroglandular silicone implants two years ago. Informed consent and consent to publish were obtained from the patient. The patient was part of a study protocol approved by the institutional ethical committee (Instituto Brasileiro de Controle do Câncer). Study protocol: Plataforma Brasil CAAE: 77,215,317.0.0000.0072. Six months following the surgery, she presented with a palpable mass in the breast, which was diagnosed as a complex cyst on ultrasound and was aspirated and drained. The cytological diagnosis showed no signs of malignancy. Soon after the procedure, the cyst grew in size again, and one year after the surgery, the patient underwent her first BMRI scan. 
The scan revealed signs of SIGBIC associated with a complex pericapsular cystic formation communicating with the intracapsular space (Figure 1). Ultrasonography revealed fibrous septa in the topography of the fibrous capsule dehiscence and echotextural changes of the silicone implant, indicating a chemical reaction in the contents of the implant, associated with a complex, well-defined solid-cystic mass with hypervascularized vegetation in the breast parenchyma (Figure 2). Complementary PET-CT (Figure 3) confirmed the findings. Percutaneous biopsy and drainage of the cyst were performed, and the histological diagnosis indicated an undifferentiated epithelial neoplasia. The patient underwent an excisional surgical biopsy to facilitate therapeutic management, which was performed along with tumorectomy and partial resection of the fibrous capsule (Figure 4). The histological diagnosis was high-grade undifferentiated carcinoma with compromised surgical margins (Figure 5). Treatment with adjuvant chemotherapy and mastectomy was scheduled. Following the surgical biopsy and before adjuvant chemotherapy, control BMRI performed for documentation purposes two weeks after the surgery revealed no tumor remnants or lymph node enlargement. The only change in BMRI, when compared with the first examination, was rotation of the silicone implant (Figure 6). After three months of adjuvant chemotherapy, the patient again underwent BMRI for preoperative staging. BMRI revealed an intracapsular mass formation in the fibrous capsule in the projection of the breast implant's deteriorated area, along with an associated intracapsular collection. The findings were compatible with those of SIGBIC, inferring silicone gel bleeding (Figure 7). Two weeks later, the macroscopic study of the fibrous capsule after mastectomy confirmed the signs of silicone leakage into the intracapsular space, owing to the presence of fibrous septa and a thick collection (Figure 8). Histological assessment of the mastectomy product revealed no tumor remnants. The fibrous capsule showed infiltrative areas of lymphocyte matrix without atypia and with intermingled foamy histiocytes, confirming the diagnosis of SIGBIC. No undifferentiated cells were found (Figure 9). The breast implant showed a microscopic fissure with an area of internal content exposure (Figure 10). Background Metaplasia Metaplasia is essentially an adaptation to injury and a precursor to dysplasia and cancer. This process involves the replacement of a mature cell type with another, caused by an injury. It can be induced or accelerated by different abnormal stimuli to the tissue (Figure 11). Malignant transformation of healthy cells occurs through different phases. The first phase is adaptive, in which native cells are stressed by the new agent. This is followed by the oncogenic phase, which involves a transformation from metaplasia to dysplasia. 16 One type of metaplasia that can be found in the breast is acinar-ductal metaplasia (ADM). 18 ADM is also seen in the pancreas and may be related to an acute or chronic inflammatory process. In mice, ADM is a well-defined precursor of pancreatic intraepithelial neoplasia, which may eventually progress to pancreatic ductal adenocarcinoma. Pancreatic metaplasia can be reversed by removing inflammatory stimuli. 19,20 Metaplasia caused by non-epithelial factors may be related to immune cells, inflammatory cells, fibroblasts, and the stimulation of external stress agents.
These agents force native cells to lose their specific identity and lineage. Pro-inflammatory and immunological cues influence epithelial cell signaling and the induction of metaplasia. Immune cells play an essential role in this process. For example, the inflammatory cytokines secreted by macrophages play important roles in pancreatic ADM, with contributions from macrophage-released matrix metalloproteinases. Similarly, interleukin-3 has been recently shown to change the populations of inflammatory macrophages; from a subpopulation of inflammatory macrophages to a subpopulation of macrophages alternatively activated in ADM. 16 In the present case, the breast tissue was permanently exposed to silicone bleeding. The implant was a continuous and persistent generator of the antigen that could have caused abnormal stimuli in the tumor microenvironment (TME). T-Cell Dysfunction in Cancer Immunity Overexposure to an antigen can result in T-cell dysfunction. In addition, chronic stimulation by the antigen results in persistent expression of programmed cell death protein by cytoplasmic NFAT 1. 21,23,24 T-cell dysfunction can be represented by T-cell exhaustion (Tex), anergic T cells, and senescent T cells. Senescent T cells represent the final state of differentiation due to a repeated stimulus, which involves an irreversible cell cycle and shortening of the telomere. 18,22 Fluid analysis of the index case showed foamy histiocytes that could have elicited T-cell dysfunction. Inhibitory Cells in the TME A variety of cell types participate in the stimulation or inhibition of tumor growth in the TME. Immunosuppressive cells, such as regulatory T cells (Tregs), tumor-associated macrophages (TAMs), myeloid-derived suppressor cells (MDSCs), cancer-associated fibroblasts and adipocytes, and endothelial cells, are present in the TME and contribute to the dysfunction. 24 Treg cells, the largest population of CD4 + T cells infiltrating the TME, can inhibit T-cell-mediated antitumor immunity. Treg cells generally interrupt T-cell activation, proliferation, and survival by producing immunosuppressive molecules, including transforming growth factor-beta and interleukin-10 (IL-10). However, therapeutic approaches targeting the antibodies in Treg cells can deplete Treg cells, reverse T-cell dysfunction, and restore T-cell antitumor immunity and immune surveillance for cancer cells. [24][25][26] TAMs suppress T-cell antitumor immunity and promote tumor development, involving functions such as sustainable accumulation of Treg cells, deregulation of vascularization by expression of chemokines and amino acid-degrading enzymes, and promotion of T-cell dysfunction. Likewise, MDSCs enter the TME in an aberrant manner, produce nitric oxide and reactive oxygen species, and express arginase 1 and IDO, promoting T-cell dysfunction. 24,27 The case described in the present study showed the presence of inflammatory cells such as foamy histiocytes and lymphocytes, which were associated with undifferentiated epithelial cells and silicone corpuscles. Morphogenesis and Bauplan Morphogenesis is a process of "forward" genomic folding that increases the complexity of processing topobiological information, with plasticity loss occurring at its highest levels. 28,29 The term bauplan was introduced to characterize the morphological features of different species. 
In the context of tissues, the bauplan in the TME allows cells to perceive topobiological information presented in the vicinity of a morphogenetic field by signaling (ie, epitopes, chemical gradients) and physical phenomena (ie, pressure, tension, bioelectric events) to respond to activities programmed as proliferation, migration, aggregation, differentiation, quiescence, apoptosis, etc. Each bauplan shows two characteristics: 1) complexity of the processed topobiological information and 2) plasticity, allowing the cells to adapt their bauplan to the topological information provided by the morphogenetic fields of the adjacent-cell populations. 28,30 The mature cells at the top of these layers' hierarchies have minimal to no plasticity but are highly specialized. This last bauplan needs to adjust the continuous loss of cells and work in conjunction with innate and adaptive immune systems to preserve the tissue's integrity. 28,31 Hockel and Ben 28 also postulated that within relevant clinical conditions, a malignant tumor is initiated by (epi-) genetic alterations, in which cell proliferation is increased, and the final layer of the bauplan is disturbed, allowing cell divisions in domains that are normally restricted from differentiation. For epithelial cancer, these manifestations are microscopically observed in dysplasia or carcinoma in situ. Genetic changes determining the function of oncogenes and resulting in the loss of tumor suppressor genes are known as driver mutations. This process of tumor development is called reverse morphogenesis. 28,32 In the case described here, the exposition of the intracapsular inflammatory content to the pericapsular space could have caused injury to the specialized epithelial cells and triggered a local inflammatory response and metaplasia. Invasive Carcinoma of the Non-Special Type with Medullar Features Invasive carcinomas of the non-special type with medullary features, categorized as medullary carcinomas (MCs), represent less than 5% of breast carcinomas. The diagnosis of MC is specific, with five diagnostic criteria being adopted: complete circumscribed lesion, growth of syncytial pattern in at least 75% of the tumor, intermediate/high nuclear grade, diffuse lymphocytic infiltrate, and absence of intraductal component or glandular differentiation. Despite the poor histological differentiation and basallike phenotype, this tumor has a favorable prognosis. 33 In histological assessments, in addition to the five diagnostic criteria, MCs show an increase in activated cytotoxic lymphocytes, with a predominance of T cells. Studies have shown that these characteristics reflect an active response of the organism to the tumor, which leads to a good prognosis. Mitoses are numerous, and atypical giant cells may be present. 33,34 Theory Our proposed theory is based on the immune response to the bleeding gel of the silicone breast implant serving as a triggering point for tumor oncogenesis in breast tissue. We propose that the inflammatory process in response to silicone particles in the pericapsular glandular tissue favors the development of cellular mutations in specialized epithelial cells. This reverse morphogenesis would result in MC according to the following steps ( Figure 12): Step 1. Breast implant placement with the formation of the fibrous capsule by the host. Step 2. Deterioration and permeability loss of the implant surface, with exposure and leakage of the silicone into the intracapsular space. Step 3. 
Fibrous capsule inflammatory response to polydimethylsiloxane (PDMS4), resulting in intracellular silicone (foamy histiocytes) triggering dysregulation of the inflammatory process with T-cell recruitment and seroma formation. Step 4. Dehiscence of the fibrous capsule with extrusion of the intracapsular component into the pericapsular space. Step 5. Exposure of epithelial cells to the inflammatory process originating from chronic gel bleeding, with predominance of monoclonal T cells and foamy histiocytes. Step 6. Changes in the microenvironment of these epithelial cells in the fibrous capsule dehiscence area. Immunosuppression of this microenvironment occurs due to the modulated inflammatory response mediated by T cells. Step 7. Dysregulated cell mutation and proliferation of epithelial cells in the presence of mutations. Discussion Despite the fact that some studies have described the association between breast carcinoma and silicone implants, the pathophysiology of cancer development and the role of silicone particles in the pathway remain unclear. Cases of BIA-ALCL, BII, and SIGBIC have been reported in the literature. In our prior work, we previously demonstrated the immune response to silicone particle exposure in patients using BMRI scans and the associated histological findings. [4][5][6] We have also previously demonstrated that all silicone implants, both smooth and textured, deteriorate over time. 8 The main result of this deterioration is the gel bleeding or shedding. 6 The proposed theory describes the possibility of pericapsular breast carcinomas development in patients with breast implants. To date, the causes of silicone breast implant-related complications remain controversial. There seems to be no consensus in the literature regarding implant safety or possible triggered complications. [1][2][3] In 2019, Coroneis et al published an article reporting post-approval FDA studies that assessed long-term implant outcomes in almost 100,000 patients. 14 Curiously, the article did not point to gel bleeding as a possible cause of implant-related harm and recommended postoperative follow-up of implants by ultrasound screening. Another study from the same year showed an improvement in clinical symptoms and related complications from gel bleeding in patients with BII who underwent an explant procedure. 35 First, we questioned whether the product of the inflammatory response to the silicone corpuscles could be carcinogenic when in contact with the glandular tissue. Could this content's physical aggression to epithelial cells, in association with the immunosuppression in the TME by T cells and TAMs, be sufficient to determine the carcinogenesis of the cells? Would the carcinogenesis be a cause or a consequence in such cases? Conceptually, the good prognosis of MC is attributable to the presence of T cells, which ensures a good immune response to the tumor and facilitates its treatment. In contrast, a lack of locally aggressive behavior along with well-delimited characteristics of these highly undifferentiated tumors are extremely interesting behaviors. On the basis of these findings, the reverse can be speculated-the reverse morphogenesis of the epithelial cells may be due to the aggression of the differentiated cells in the last bauplan layer, favored by the immunosuppressive effects of T cells, TAMs, and their products. Therefore, carcinoma may be a consequence of the inflammatory process rather than the inflammatory process resulting from the carcinoma. 
After the excisional biopsy, no tumor remnants were observed by BMRI without neoadjuvant treatment, despite the margins compromised by the surgical biopsy ( Figure 4). The only evolutive imaging change observed was the implant's rotation compared to the previous study. The shell's discontinuity area was facing the contralateral side of the fibrous capsule. These follow-up BMRI findings for undifferentiated tumors are not trivial in clinical practice. In the following pre-surgical study, intracapsular SIGBIC was observed in the shell's discontinuity projection, confirmed by histology. The proposed pathological pathway is not exclusive to cancer development. It is worth mentioning that the pathology is usually multifactorial. Thus, the response to the same pathogen can vary among individuals. In this context, in a recent study, Onneking et al described the possible cellular toxicity exerted by silicone. 36 According to the authors, silicone implants are composed of crosslinked alloys of polydimethylsiloxane chains in threedimensional networks. Since the chains are cross-linked, lower molecular weight silicone particles with linear or cyclic structures, including D4, D5, and D6, filled the empty spaces. In this study, the authors tested the toxicity of free silicone particles in vitro on Jurkat cells (human T lymphoblast non-adherent lineage). The silicone used for the test was D4 (octadimethylcyclotetrasiloxane) since it has the lowest molecular weight and is the most susceptible to leakage. D4 is hydrophobic, poorly soluble in water, and lipophilic, allowing the particle to pass through the Jurkat membrane. The study also showed that the toxicity of silicone varied depending on the cell type. D4 determined the cell death of Jurkat cells. In HeLa cells (cervical carcinoma epithelial cells), toxicity was less efficient, whereas in HEp-2 cells (human epithelial cells), cell death was not observed. 36 The results presented in the article suggest that in addition to the mechanism proposed by our theory, the direct toxicity of silicone in T cells and eventually on epithelial cells in the TME could catalyze the process of oncogenesis. A recently published article aimed to determine the association between breast cancer recurrence in patients undergoing mastectomy. The article concluded that textured implants have higher recurrence rates of cancer than smooth implants. However, the authors assumed that the recurrence mechanism could not be clarified by the data collected. The recurrence rate described in that study is the same as that reported in the literature after mastectomy, varying between 5% and 10%. A bias of the manuscript was the high prevalence of patients with textured implants compared to smooth implants. 14 A similar debate was observed in relation to tumor recurrence in patients undergoing autologous fat transfer (AFT). For both examples, we believe that tumor recurrence was more related to the type of oncoplastic surgery than to oncogenic factors, such as the implant surface type and AFT. 37 In early 2020, La Forgia et al described the earlier BIA-ALCL imaging findings, including periprosthetic involvement. [38][39][40] The manuscript findings are similar to those found in our SIGBIC articles. We have described two types of silicone in the fibrous capsule: the extracellular silicone (giant cell), a capsule "scar" result of the inflammation process. 
Furthermore, the intracellular silicone (foamy histiocyte) appears when there is macrophage-mediated T-cell activation and is often associated with an intracapsular fluid collection and fibrin. We divided SIGBIC into three categories: 1) intracapsular SIGBIC, 2) SIGBIC with extracapsular extension, and 3) mixed SIGBIC associated with seroma. 7 The referenced articles described similar findings that corroborate that SIGBIC and BIA-ALCL could have the same trigger point: silicone bleeding. The articles also support our theory, in which we hypothesized that D4 could interfere with the cell morphogenesis process in periprosthetic epithelial cells. In our prospective study on breast implants, we found carcinoma in 5% of patients (to be published). However, it was not possible to assess the integrity of the implant surface in these cases. For this reason, we chose to use in this hypothesis and theory article a case with extensive documentation by imaging methods, histology, clinical manifestations, and microscopic study of the implant surface to support our findings (Figure 13). Figure 13 (A and B): Another example of undifferentiated breast carcinoma (blue star). There is a communication between the intracapsular space and the breast tumor. A discontinuity of the fibrous capsule is observed (green arrow). There is also the black-drop signal in the fibrous capsule (yellow circle) (A). The red arrow shows an intracapsular fluid collection with septa, inferring fibrin content (B). Confirmation of our hypothesis would possibly indicate the presence of many other silicone-induced breast tumors in patients with silicone implants, whose etiology has remained undiagnosed due to the lack of scientific evidence in the literature. In this regard, a better understanding of the underlying oncogenic processes will facilitate the development of an appropriate therapeutic proposal to improve efficiency and reduce morbidity in patient care. Future Perspectives We hope that this theory will lead to further studies correlating breast implants with breast carcinomas. For this, it would be necessary to identify SIGBIC findings in preoperative examinations, to investigate the presence of foamy histiocytes on microscopy by pathologists, and to evaluate microscopically the integrity of the silicone implant surface in patients with suspicious lesions in the pericapsular implant space. Our theory proposes that free silicone could elicit epithelial metaplasia/dysplasia, and that the host response to this external antigen could vary, giving rise to tumors ranging from more differentiated to undifferentiated. Since this tumor is probably antigen dependent, its treatment could be personalized. Considering these findings, the safety of breast implants must be questioned, given the complexity of the complications triggered by the immune response.
2021-01-22T05:11:44.748Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "b93613cbbb97003ef14c3e11e4ca591492c1e0c2", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=65751", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b93613cbbb97003ef14c3e11e4ca591492c1e0c2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
25387557
pes2o/s2orc
v3-fos-license
Assessment of bariatric surgery results The objective was to evaluate the results of bariatric surgery in patients in the late postoperative period using the Bariatric Analysis and Reporting Outcome System (BAROS). This cross-sectional study was conducted from November 2011 to June 2012 at a hospital in the state of Ceará, Brazil. Data were collected from 92 patients using the BAROS protocol, which analyzes weight loss, improved comorbidities, complications, reoperations and Quality of Life (QoL). Data were analysed using the chi-squared test, Fisher's exact test and the Mann-Whitney test. There was a reduction in the Body Mass Index (47.2 ± 6.8 kg/m² in the preoperative period and 31.3 ± 5.0 kg/m² after surgery, p < 0.001). The comorbidity with the highest resolution was arterial hypertension (p < 0.001), and QoL improved in 94.6% of patients. The main complications were hair loss, incisional hernia and cholelithiasis. The surgery provided satisfactory weight loss and improvements in the comorbidities, associated with a better QoL. Use of the BAROS protocol allows nurses to plan interventions and maintain the good results. INTRODUCTION Currently, bariatric surgery is the best treatment for morbid obesity. It complements the practice of other therapies to control weight and the comorbidities associated with excessive adiposity. In addition to providing long-term sustainable weight loss, this surgical procedure also improves metabolism, which helps to resolve various diseases and promotes biopsychosocial well-being (1)(2). This treatment should be indicated for individuals who have a Body Mass Index (BMI) ≥ 40 kg/m² or ≥ 35 kg/m² with some comorbidity, and who are motivated and aware of the lifestyle changes required after surgery (2)(3). The good results obtained during the first years must be seen by these patients as the stimulus needed to change their living habits. Thus, the initial incentives motivated by weight loss should subsequently focus on practicing physical activity, healthy eating and postoperative follow-up to ensure the persistence of favourable results (4). To assess the success of treatment, periodic follow-up is required after the surgical intervention. This follow-up should include the analysis of weight loss, changes in comorbidities and quality of life, and the occurrence of complications and reoperations (5). Weight loss is considered one of the main parameters to define the success of bariatric surgery, and there is a consensus among researchers that the criterion for this evaluation is an excess weight loss (%EWL) of at least 50%, with weight loss maintenance over the years (6)(7). Oria and Moorehead (1998) developed the Bariatric Analysis and Reporting Outcome System (BAROS). This protocol is internationally recognized for its practicality and efficiency, and is currently considered the only instrument that allows a complete and objective assessment of the results of bariatric surgery (8)(9). In recent years, nursing has been extending its practices for this specific population. Thus, the use of instruments like the BAROS enables nurses to obtain information on patients' adaptation during the postoperative follow-up and directs the actions of care.
Nursing care at this stage should be geared toward patient recovery and wellness in a short time span. This care should also focus on preventing complications and increasing self-care, which will result in a better postoperative experience, with the best results in terms of weight loss, comorbidity resolution and quality of life. Thus, in order to reinforce the importance of continued care for patients during the postoperative period, the research question was: what are the results in the late postoperative period of bariatric surgery with the use of the BAROS? The growing number of achievements of bariatric surgery (1) strengthens and justifies the conduct of this study, considering the need for knowledge of the benefits achieved with this treatment for controlling obesity and improving the health of patients. Consequently, the aim of this study was to assess the results of bariatric surgery in patients in the late post-operative period based on the Bariatric Analysis and Reporting Outcome System (BAROS). METHOD This cross-sectional study was conducted between November 2011 and June 2012 at a benchmark hospital in bariatric surgery for the Unified Health System (SUS) in the state of Ceará, Brazil. The target population was the 570 patients of the Obesity Programme of the State of Ceará who were receiving post-operative care for bariatric surgery. The late post-operative period starts seven days after surgery and can last weeks or months; this period represents healing time and the prevention of complications. The convenience sample consisted of 92 patients who attended consultations with the multi-disciplinary team during the data collection period. The inclusion criteria were patients aged 18 or over who had been in the post-operative period for at least three months. This period was established in order to approach patients when they were initiating the practice of physical activities and a special diet. Furthermore, Ordinance 492/SAS of the Ministry of Health, which establishes standards for licensing and authorizing High Complexity Care Units for Patients with Severe Obesity and Guidelines for the Care of Patients with Severe Obesity, recommends the use of the BAROS for assessing the success of surgery and considers that the full protocol should be applied from the 3rd month of the post-operative period (10). Exclusion criteria were patients with a cognitive limitation that could compromise the response to the data collection instrument and patients who were not registered in the institution's obesity programme. Data were collected by completing an instrument with clinical-epidemiological
information (sex, age, type of surgery, time of post-operative period, weight, height, and pre- and postoperative Body Mass Index (BMI)), a questionnaire on post-operative quality of life, and the BAROS. The domains evaluated in the BAROS are weight loss (percentage of excess weight loss), clinical evaluation (by identifying improvements or the resolution of comorbidities, such as heart disease, SAH, DM, osteoporosis, infertility and sleep apnea) and quality of life evaluation (with the Moorehead-Ardelt Questionnaire II) (5,8).

If patients present any disease in the preoperative period (arterial hypertension, diabetes mellitus II, cardiovascular disease, dyslipidemia, obstructive sleep apnea, osteoarthritis and infertility), changes in these comorbidities are evaluated in the post-operative period as follows: aggravated (score -1), unaltered (score 0), improved (score 1), one of the more serious comorbidities resolved and the others improved (score 2), and all main comorbidities resolved and the others improved (score 3). Patients who did not present preoperative comorbidities are classified as unchanged and receive a score of zero (5).

To assess quality of life, the Moorehead-Ardelt Quality of Life Questionnaire II (QoL-II) was used, with the following six variables: 1) self-esteem, 2) physical activity, 3) social relations, 4) job satisfaction, 5) pleasure related to sexuality, and 6) eating behaviour. Each variable is worth 0.5 points, totalling 3 points for the quality of life domain. Once the scores are added, quality of life is classified as very reduced (-3 to -2.1 points), reduced (-2 to -1.1 points), unchanged (-1 to 1 points), improved (1.1 to 2 points) and greatly improved (2.1 to 3 points) (5).

Postoperative complications are classified as clinical, surgical, greater, lesser, early or late (5). Regardless of the number of complications, -0.2 points are deducted for lesser complications and -1 point is deducted for greater complications. If the patient has one lesser and one greater complication, -1 point is deducted for these complications (5). In case of reoperation due to the occurrence of a complication, the score is zero. Patients who do not present complications receive a zero score. If a patient requires a reoperation, 1 point is deducted from the total score (5).

After completing the BAROS data and the questionnaire, each patient receives a score. According to the final score, the surgical evolution of patients is classified as insufficient (0 or less points), acceptable (0 to 1.5 points), good (1.6 to 3 points), very good (3 to 4.5 points) and excellent (4.6 to 6 points). For patients with comorbidity, the classification is as follows: insufficient (1 or less), acceptable (1.1 to 3 points), good (3.1 to 5 points), very good (5.1 to 7 points) and excellent (7.1 to 9 points) (5).

Data were arranged in tables and graphs with absolute frequencies and percentages, and analysed using the Statistical Package for the Social Sciences (SPSS) version 19. Differences between proportions were assessed using the chi-squared test and Fisher's exact test, and differences between continuous variables were assessed using the Mann-Whitney test. The Wilcoxon test for paired samples was used to compare patients' pre-surgical and post-surgical BMI. BMI was classified according to criteria established by the World Health Organization (WHO) (12). The adopted significance level was 5% and the confidence interval was 95%.
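Read together, the scoring rules above amount to a short arithmetic procedure. The sketch below is only a schematic illustration of how the domain scores might be combined into the final classification; the function names, the simplified complication handling and the example values are ours and are not part of the BAROS instrument or of this study.

```python
def baros_final_score(ewl_points, comorbidity_points, qol_points,
                      lesser_complication=False, greater_complication=False,
                      reoperation=False):
    """Combine BAROS domain scores into a final score (illustrative only).

    ewl_points         : points awarded for the % excess weight loss domain
    comorbidity_points : -1 to 3, as described in the text
    qol_points         : -3 to 3, from the quality of life questionnaire
    """
    score = ewl_points + comorbidity_points + qol_points
    if lesser_complication and greater_complication:
        score -= 1.0      # combined deduction described in the text
    elif greater_complication:
        score -= 1.0
    elif lesser_complication:
        score -= 0.2
    if reoperation:
        score -= 1.0
    return score

def classify(score, has_comorbidity):
    """Map the final score to the BAROS outcome groups (bands depend on comorbidity)."""
    if has_comorbidity:
        bands = [(1.0, "insufficient"), (3.0, "acceptable"), (5.0, "good"),
                 (7.0, "very good"), (9.0, "excellent")]
    else:
        bands = [(0.0, "insufficient"), (1.5, "acceptable"), (3.0, "good"),
                 (4.5, "very good"), (6.0, "excellent")]
    for upper, label in bands:
        if score <= upper:
            return label
    return "excellent"

# Example: 2 points for weight loss, all main comorbidities resolved (3),
# greatly improved quality of life (2.5) and one lesser complication.
s = baros_final_score(2, 3, 2.5, lesser_complication=True)
print(s, classify(s, has_comorbidity=True))  # 7.3 excellent
```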
The patients were invited to participate in the study, given all due explanations and asked to sign an informed consent statement. This study was approved by the Human Research Ethics Committee of the institution (CEP538/2011).

RESULTS

Of the 92 patients who participated in this study, 82.6% (76) were women. In relation to age, most patients, 33.7% (31), were between 29 and 38 years old, with an average age of 40.53 ± 10.03 and an age range of 22 to 70 years. A total of 43.4% (40) had been in the postoperative period for 7 to 24 months, which represented the period of greater weight loss.

In relation to the surgical technique, 53.3% (49) of patients had been operated via videolaparoscopy, while 46.7% (43) underwent open or conventional surgery. In the institution where the study was conducted, the adopted surgical technique was Fobi-Capella, which was performed via laparoscopy or open surgery until 2010 and is currently only performed via laparoscopy. Thus, the predominance of videolaparoscopy is justified in the results of this study.

Table 1 shows the BMI classification of patients of the Obesity Programme of the State of Ceará. In the preoperative period, minimum BMI was 35.1 kg/m², maximum BMI was 74.2 kg/m² and average BMI was 47.2 ± 6.8 kg/m². In the postoperative period, there was a change in this profile, given that 37% (34) of patients were overweight, 35.9% (33) were obese and 5.4% (5) had a normal BMI. Class II obesity in 15.2% (14) of patients and class III obesity in 6.5% (6) of patients in the postoperative period is related to the fact that severely obese individuals with a BMI above 55 kg/m² managed to lose weight and reduce their BMI to a lesser class of obesity, which is evidently considered an important achievement. In the postoperative period, minimum BMI was 23.8 kg/m², maximum BMI was 49.8 kg/m², and average BMI was 31.3 ± 5.0 kg/m². There was a difference of 15.9 kg/m² in the BMI of patients between the pre- and postoperative periods (p < 0.001).

Of the 92 patients, 59.8% (55) presented comorbidities in the pre-surgical period. Of these patients, 40% (22) had more than one comorbidity, which supports the fact that obesity is a risk factor for the occurrence of several associated diseases. The most prevalent of these diseases was SAH, with a prevalence of 50% (46) among the patients (p < 0.001). The second most prevalent were DM 2 and dyslipidemia, with a frequency of 13% (12) and a value of p = 0.001 each.

Table 2 shows the characterization of comorbidities of patients in the pre- and postoperative period. SAH had a prevalence of 50% (46) in the preoperative period. Of this 50%, 97.8% (45) stopped taking medication in the postoperative period and were only following a diet and exercising. DM 2 also presented good results with surgery, considering that 13% (12) of patients had this disease. In the postoperative period, 83.3% (10) did not use medication and 16.7% (2) controlled the disease with oral hypoglycaemic drugs. In relation to dyslipidemia, 13% (12) of patients presented this comorbidity in the preoperative period, while in the postoperative period, 83.3% resolved this comorbidity and 8.3% (1) showed an improvement with medication (Table 2).

In the postoperative group, 75% (69) considered that their quality of life had considerably improved after surgery, 19.6% (18) felt that their QoL had improved and only 5.4% (5) of patients classified their QoL as unchanged. None of the patients classified their QoL as bad or very bad (p < 0.001).
Based on the BAROS, the scope for complications is classified in three ways: greater complications; lesser complications; and lesser with greater complications. Of the participating patients, 67.4% (62) developed some type of complication. However, most of these complications, 51.6% (32), were identified as being lesser. Vomiting in the immediate postoperative period occurred as a lesser, early complication, while anaemia, hair loss and nutritional deficiency occurred as lesser late complications. Greater complications presented a frequency of 19.4% (12). An early greater complication was a case of suture dehiscence, while greater late complications were cholelithiasis and incisional hernia. The occurrence of a lesser complication together with a greater complication was identified in 29% (18) of patients.

Analysis also showed that 32.6% (30) of patients required reoperation. These reoperations were necessary due to the presence of complications such as hernia, cholelithiasis and oesophageal stricture. Table 3 shows the association of the two main causes of reoperations with the adopted surgical technique. The need for reoperation was statistically significant for incisional hernia associated with the open surgical technique (p < 0.001).

Based on the BAROS, patients showed good results in relation to bariatric surgery. In relation to final scores, 44 (47.8%) classified the success of surgery as excellent, 39 (42.4%) as very good, eight (8.7%) as good, and one (1.1%) as acceptable.

DISCUSSION

In Brazil, the most widely used bariatric surgery technique is the laparoscopic Roux-en-Y Gastric Bypass (LRYGB), known as Fobi-Capella, because it favours 40% long-term weight loss in relation to the initial weight. It also reduces the occurrence of important nutritional and metabolic alterations, allowing patients to improve their quality of life both physically and emotionally (13).

Among the patients that underwent bariatric surgery, there was a difference of 15.9 kg/m² between the pre- and post-surgical BMI, which indicates a satisfactory improvement in obesity. These results corroborate the findings of another study that verified a significant reduction in BMI after bariatric surgery, with an average of 49.56 kg/m² before surgery and 28.3 kg/m² (6) in the postoperative period. Depending on the type of surgery, weight loss tends to be more intense during the first six months and stabilizes after two years, with chances of weight gain after reaching this plateau (14). This stresses the role of nurses in the assessment of patient evolution and the provision of health education to help patients reach their weight loss goals.

A study that assessed the QoL of patients before and after bariatric surgery in the Brazilian public health system showed that after surgery, 82.2% of patients considered their quality of life as being good or very good, which contrasted with the 40% of patients who expressed the same opinion during the preoperative period (15). These findings confirm that an improved quality of life is fundamental for the success of bariatric surgery, which often transforms the lives of obese individuals (16).
In the pre- and postoperative period, nurses can assess the patient's QoL and use this information to compare changes before and after surgery. The use of questionnaires provides precise information on how patients analyze their biopsychosocial well-being. For this reason, nursing professionals should familiarize themselves with the wide range of available instruments that assess QoL and implement these instruments in their practice. This attitude would ensure that interventions are more focused on patient needs and would subsequently improve the quality of care.

Early identification and treatment of possible complications are essential for obtaining good results, and the assessment of these results is important during patient follow-up by the multi-professional team (17). To reduce immediate and late postoperative complications, patients should be instructed on all the aspects of care during this period, which include nutrition, physical activity, hygiene and surgical risks (18)(19).

Preparing patients positively influences their adaptation to postoperative conduct, considering that patients will be more aware of the entire process of the perioperative period. This knowledge helps patients clarify doubts and queries on the potential of weight loss, diet phases, the benefits of physical activity, possible complications and the possibility of regaining weight.

In Spain, a study evaluated 162 patients before surgery and two days after surgery. Of the 162 patients, 94.7% had a final score of good or excellent in the BAROS (20). In Brazil, the BAROS was used in some studies to verify the success of bariatric surgery and the quality of life of patients after surgery.

In São Paulo, a study assessed the results of surgery and the relationship of these results with quality of life, weight loss and comorbidity resolution over several postoperative periods. The study showed that the qualitative results of the BAROS were very good or excellent in 90% of all the evaluated periods (4). Another study in São Paulo that assessed the quality of life of patients who underwent bariatric surgery found that 93% of patients scored good, very good or excellent (15).

In this study, the results ranged from good to excellent, corroborating the findings in the literature. These results suggest that the use of instruments such as the BAROS by nurses in the care process favours the planning of nursing actions when providing patient care (7). This emphasizes the importance of nursing care, especially during the postoperative period, since it is the first moment of the patient's adaptation to a new lifestyle. Nurses must therefore extend their participation in care and guidance on the changes in lifestyle, as this participation is essential for the success of surgery and the well-being of patients.

CONCLUSION

The application of the BAROS to patients who underwent bariatric surgery to determine the success of this procedure showed that bariatric surgery provides satisfactory weight loss, reduces BMI and enables the resolution and/or improvement of associated comorbidities, which has a positive impact on the quality of life of patients. However, despite the success and efficiency of this treatment, the participants of this study did present some complications. It is therefore important for healthcare professionals to know the possible complications and their signs and symptoms.
Consequently, this study contributes to the science of nursing by shedding new light on the fundamental importance of the nurses' role in the multidisciplinary team for the provision of quality care for patients and their families during the entire perioperative period.

Limitations of this study are associated with the insufficient time to monitor patients for a specific period in a longitudinal study, from the moment they entered the institution to at least six months into the postoperative period, which would have provided a more reliable analysis.

Table 1. Distribution of the BMI classification of patients who underwent bariatric surgery in the Obesity Programme of the State of Ceará (n = 92), Fortaleza-CE, Brazil, 2012. Source: research data. * Wilcoxon test for paired samples (pre-surgical and post-surgical).

Table 2. Characterization of the clinical conditions of patients in the Obesity Programme of the State of Ceará who underwent bariatric surgery (n = 92), Fortaleza-CE, Brazil, 2012.

Table 3. Distribution of the occurrence of complications that led to reoperation, according to surgical technique, of patients in the postoperative period of the Obesity Programme of the State of Ceará (n = 92), Fortaleza-CE, Brazil, 2012.
2019-01-03T06:46:21.901Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "521a2c551cabd4b5ca4c2b429bd4e0f4b2468a79", "oa_license": "CCBYNC", "oa_url": "http://www.scielo.br/pdf/rgenf/v36n1/1983-1447-rgenf-36-01-00021.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "521a2c551cabd4b5ca4c2b429bd4e0f4b2468a79", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211006086
pes2o/s2orc
v3-fos-license
Utility of wound cultures in the management of open globe injuries: a 5-year retrospective review Background Endophthalmitis after open globe injury can be devastating to vision recovery. As treatment of endophthalmitis is often empiric, some surgeons may obtain cultures at presentation of trauma in anticipation of later infection. This study examines the usefulness of wound cultures obtained during globe repair. Results Institutional Review Board approval was obtained. Medical records were retrospectively reviewed, with 168 open globes included. Cultures of the wound site had been taken in all cases included in this study. Wound cultures were positive in 63% of cases but were not used for clinical decision-making for any patient in this study. Two patients had evidence of endophthalmitis at presentation, with results of vitreous culture matching those from the wound. No patient later developed endophthalmitis after open globe repair. Conclusions Despite a high rate of wound contamination, few cases of endophthalmitis (1.2%) were seen in this series. In no case did the results of wound culture impact choice of antibiotic prophylaxis or treatment. Cultures obtained at the time of open globe repair were not cost effective in the subsequent management of the injury. Background Open globe injury results from penetration or perforation in sharp or projectile trauma or from rupture in blunt trauma [1]. Endophthalmitis is a potentially devastating sequela of open globe injury. Rates of endophthalmitis after severe ocular trauma range from 0 to 17% [2,3], with a higher risk in the presence of an intraocular foreign body (IOFB) [4,5]. Outcomes are typically worse than post-operative endophthalmitis [2]. Prophylactic antibiotics may be administered, but data supporting a particular treatment protocol are scarce [6]. If endophthalmitis develops, treatment is often initiated empirically [7]. Staphylococcus and Streptococcus species are common causative agents [7], as they are skin flora and may enter through an open wound [2]. Some surgeons may obtain samples for culture at the time of globe repair to have a responsible microbial agent identified by the time the clinical picture worsens [8,9]. However, despite positive cultures even from intraocular sources at the time of open globe repair, endophthalmitis may not develop [8,10]. Some surgeons, including those at our institution, routinely obtain wound cultures at the time of surgical repair [7,8]. The purpose of this study was to analyze the clinical usefulness and cost-effectiveness of this test in the care of open globe injury. Methods Institutional review board (IRB) approval was obtained at the University of Mississippi. This study was a retrospective non-comparative case series patient record review from June 2012 to April 2016. Electronic medical records were searched for a diagnosis code of open globe or corneal-scleral laceration and reviewed for correct coding. Patients who underwent primary globe repair, had wound cultures taken preoperatively, and followed up for at least 1 month post-operatively were included. Patients were excluded if the eye was primarily removed rather than repaired, if cultures were not obtained, or if they were lost to follow-up prior to 1 month. After induction of general anesthesia and endotracheal intubation, cultures were obtained from the wound site using cotton tip applicators. See Table 1 for listing of cultures obtained as a departmental standard in every case. 
The globe was then prepped with 5% ophthalmic betadine and draped for ophthalmic surgery. Globe repair was performed by multiple faculty at a single institution with a variety of approaches corresponding to the nature of the injury. Ocular and systemic antibiotics were administered at the discretion of the surgeon. If given, ocular antibiotics were administered after cultures were obtained. Patients were started on topical medication post-operatively (prednisolone acetate 1%, moxifloxacin 0.5%, atropine 1%) for at least the following week and were followed in clinic. Patient presentation Two hundred and twenty-nine eyes were recorded with a diagnosis code of open globe injury. Eleven eyes were primarily enucleated or eviscerated, 39 were lost to follow-up prior to 1 month, and 11 did not have cultures obtained. Remaining 168 eyes of 166 patients were included for study, including both eyes in two cases of bilateral ocular trauma. The average patient age at presentation was 38 years (range 1-93). Average length of follow-up was 272 days. There were 42 females (25%). Spring (March-May) was the most common season of presentation, with 53 globes (31.5%); winter (December-February) and summer (June-August) were next most common, with 41 and 43 globes, respectively (24.4 and 25.6%); and fall (September-November) least common with 31 globes (18.5%). An intraocular foreign body was found in 23 eyes (13.7%). Biomaterial was involved in 50 globes (29.8%). Most common mechanisms of injury were metal (26.8%) and wood or plant material (13.1%). Assault and fall were also common causes, each responsible for 11.3% of cases (Table 2). Antibiotic choice Antibiotics were administered in a majority of cases (Table 3). Intraocular antibiotics were administered in 151 eyes (90%). Vancomycin was administered in each of these cases, usually in combination with clindamycin (79%) but with ceftazidime in 11% and by itself in one case. Subconjunctival antibiotics were administered in 88% of cases. Tobramycin was the most common subconjunctival choice; cefazolin, vancomycin, and ceftazidime were also administered. Systemic antibiotics were administered in the emergency department or operating room in 79% of cases. Cefazolin was the most common (76%); and ceftazidime, ceftriaxone, and vancomycin were less frequently administered. Culture results Overall, 106 (63%) of cultures obtained were positive. Of these positive cultures, Staphylococcus species were most commonly seen (91.5%), Streptococcus viridans was next most common (10.4%), and 17.9% were polymicrobial. Only five samples grew fungus ( Table 4). Mechanism of injury involving biomaterial was not significantly associated with culture positivity of trauma without biomaterial (p = 0.15 by Chi square). were not used for the remainder of their care. In no case was topical antibiotic prophylaxis changed based on culture results. Wound culture, in this study, had a 100% sensitivity, a 37.4% specificity, and a positive predictive value of 1.9% for endophthalmitis development, with an accuracy of 38%. Culture costs Cost of culture media from the supplier were calculated for the standard culture analysis performed per patient (Table 1). Per patient, total cost for supplies was $26.27, and hospital charges for cultures were $875. This amounts to $2206.68 of supplies and $73,500 of patient charges per case of endophthalmitis secondary to an open globe. Vision outcomes Of the 168 eyes, 17 did not have a recordable vision secondary to patient cooperation with exam. 
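The diagnostic-performance and cost figures quoted in this section follow from simple counts. The short sketch below reproduces that arithmetic; the 2 × 2 cell counts are reconstructed from the rounded totals reported in the text (168 eyes, 2 endophthalmitis cases, 106 positive cultures), so the derived values are approximate and the variable names are ours.

```python
# Approximate 2x2 reconstruction from the reported totals (168 eyes,
# 2 endophthalmitis cases at presentation, 106 positive wound cultures).
tp, fn = 2, 0          # both infected eyes had positive wound cultures
fp = 106 - tp          # culture-positive eyes that never developed infection
tn = 168 - tp - fn - fp

sensitivity = tp / (tp + fn)                   # 1.00 (reported 100%)
specificity = tn / (tn + fp)                   # ~0.37 (reported 37.4%)
ppv = tp / (tp + fp)                           # ~0.019 (reported 1.9%)
accuracy = (tp + tn) / (tp + tn + fp + fn)     # ~0.38 (reported 38%)

supply_cost_per_patient = 26.27
charge_per_patient = 875.00
endophthalmitis_cases = 2
supply_cost_per_case = supply_cost_per_patient * 168 / endophthalmitis_cases  # ~2206.68
charges_per_case = charge_per_patient * 168 / endophthalmitis_cases           # 73500.0

print(sensitivity, round(specificity, 3), round(ppv, 3), round(accuracy, 2))
print(round(supply_cost_per_case, 2), charges_per_case)
```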
Of those with recordable vision, mean vision was HM (logMAR 2.1) at presentation and CF (logMAR 1.8) at last follow-up. At final follow-up, vision improvement was seen in 66 eyes (43.7% of recorded), 40 worsened (26.4%), and 45 were stable (29.8%) compared with presentation. The eye was eviscerated or enucleated in 21 cases (12.5%) during the follow-up period. Final vision was correlated with vision at presentation (r = 0.75, p < 0.05). Discussion Swab cultures from the globe wound were obtained preoperatively in all cases included in this study. The culture results obtained were not used clinically, and no patient without evidence of infection at presentation went on to develop endophthalmitis. For the two patients with endophthalmitis at presentation, vitreous cultures were obtained that matched wound cultures. To the best of our knowledge, our study represents the largest study to date of wound cultures in open globe injuries. Wound cultures were positive in 63%, compared with 23% by Rubsamen et al. [8] and 20% of conjunctival wash cultures by Bhala et al. [12]. Rubsamen et al. found a low sensitivity but high specificity of wound cultures obtained intraoperatively [8]. In that study, the rate of traumatic endophthalmitis was 13% and the intraoperative cultures were clinically useful for antibiotic selection [8]. The study by Bhala et al. [12] also reported a high rate of posttraumatic endophthalmitis (40%), and a positive culture obtained at the time of globe repair correlated to risk of infection. No distinction was made, however, between positive intra-or extra-ocular culture result in endophthalmitis risk [12]. Conjunctival and eyelid swab cultures may correlate with aqueous cultures at the time of globe repair, indicating that contamination may be from the skin flora prior to or during surgery [9]. Open globe repair has been reported to cost $850-3000 [13][14][15] in hospital charges worldwide, with higher costs for more complex cases requiring further surgery [14]. Post-operative hospitalization further increases cost, up to an average of $4500 [14]. The societal impact can be devastating, as vision or globe loss can lead to disability and an estimated loss of 25% of earning capacity [16]. Globe loss during war, for example, is estimated to cost $3 million over a lifetime [17]. Endophthalmitis, too, can be very costly to the healthcare system. Post-operative endophthalmitis increases Medicare charges by $3500 [18]. Prophylactic intraocular antibiotics, however, may save the healthcare system $88,000 over 10 years [19]. Intraocular antibiotics has been shown by metaanalysis to reduce the risk of traumatic endophthalmitis [20]. While several centers routinely use prophylactic systemic antibiotics [5,21,22], their use is controversial without strong supporting evidence [7]. Intravenous antibiotics were not advantageous over oral prophylaxis in a randomized controlled trial [6]; however, systemic antibiotics have not been compared with local administration in a similar fashion. Systemic antibiotics may have a poor intraocular penetration, and intravenous therapy increases hospitalization costs [23]. Patients in this study were not randomized, and the majority of patient received ocular and systemic antibiotics per surgeon preference. Although a prospective study would not utilize the insurance system for culture costs, this retrospective study collected data from an established treatment approach in place as a departmental standard. 
The rate of endophthalmitis seen in this study is on the lower end of the range reported in the literature [2], possibly related to the high use of prophylactic antibiotics. Conclusions In conclusion, cultures obtained at the time of open globe injury repair during this study period were not clinically useful or cost effective in the subsequent management of the injury. With a low rate of endophthalmitis, the positive predictive value of this test was low. We recommend obtaining cultures only if evidence of intraocular infection exists.
2020-02-03T15:48:41.703Z
2020-02-03T00:00:00.000
{ "year": 2020, "sha1": "46bf886f087f1f7298d38963e0566a46b2cfc71e", "oa_license": "CCBY", "oa_url": "https://joii-journal.springeropen.com/track/pdf/10.1186/s12348-020-0196-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "46bf886f087f1f7298d38963e0566a46b2cfc71e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
5696349
pes2o/s2orc
v3-fos-license
Detecting Spurious Counterexamples Efficiently in Abstract Model Checking Abstraction is one of the most important strategies for dealing with the state space explosion problem in model checking. In the abstract model, the state space is largely reduced, however, a counterexample found in such a model may not be a real counterexample in the concrete model. Accordingly, the abstract model needs to be further refined. How to check whether or not a reported counterexample is spurious is a key problem in the abstraction-refinement loop. In this paper, a formal definition for spurious path is given. Based on it, efficient algorithms for detecting spurious counterexamples are proposed. Introduction Model checking is an important approach for the verification of hardware, software, multi-agent systems, communication protocols, embedded systems and so forth. The term model checking was coined by Clarke and Emerson [1], as well as Sifakis and Queille [2], independently. The earlier model checking algorithms explicitly enumerated the reachable states of the system in order to check the correctness of a given specification. This restricted the capacity of model checkers to systems with a few million states. Since the number of states can grow exponentially in the number of variables, early implementations were only able to handle small designs and did not scale to examples with industrial complexity. To combat this, kinds of methods, such as abstraction, partial order reduction, OBDD, symmetry and bound technique are applied to model checking to reduce the state space for efficient verification. Thanks to these efforts, model checking has been one of the most successful verification approaches which is widely adopted in industrial community. Among the techniques for reducing the state space, abstraction is certainly the most important one. Abstraction technique preserves all the behaviors of the concrete system but may introduce behaviors that are not present originally. Thus, if a property (i.e. a temporal logic formula) is satisfied in the abstract model, it will still be satisfied in the concrete model. However, if a property is unsatisfiable in the abstract model, it may still be satisfied in the concrete model, and none of the behaviors that violate the property in the abstract model can be reproduced in the concrete model. In this case, the counterexample is said to be spurious. Thus, when a spurious counterexample is found, the abstraction should be refined in order to eliminate the spurious behaviors. This process is repeated until either a real counterexample is found or the abstract model satisfies the property. In the abstraction-refinement loop, how to check whether or not a reported counterexample is spurious is a key problem. In [3], algorithm SplitPath is presented for checking whether or not a counterexample is spurious, and a SAT solver is employed to implement it [4,10]. In SplitPath, whether or not a counterexample is spurious can be checked by detecting the first failure state in the counterexample. If a failure state is found, the counterexample is spurious, otherwise, the counterexample is a real one. However, whether or not a state, sayŝ i , is a failure state relies on the prefix of the counterexampleŝ 0 ,ŝ 1 , ...,ŝ i . This brings in a polynomial number of unwinding of the loop in an infinite counterexample [3,15]. 
In this paper, based on a formal definition of failure states, spurious paths are re-analyzed, and a new approach for checking spurious counterexamples is proposed. Within this approach, whether or not a counterexample is spurious still depends on the existence of failure states in the counterexample. However, instead of depending on the prefix, checking whether or not a state ŝ_i is a failure state depends only on ŝ_i's pre- and post-states in the counterexample. Based on this, for an infinite counterexample, the polynomial number of unwindings of the loop can be avoided. Further, the algorithm can easily be improved by detecting the heaviest failure state, so that a number of model checking iterations can be saved in the whole abstraction-refinement loop. In addition, the algorithm can be naturally parallelized.

The rest of the paper is organized as follows. The next section briefly presents the preliminaries of abstraction-refinement. In section 3, why spurious counterexamples occur is analyzed intuitively and algorithm SplitPath is briefly presented. In section 4, a formal definition of spurious counterexamples is given with respect to the formal definition of failure states. Further, in section 5, efficient algorithms for checking whether or not a counterexample in the abstract model is spurious are presented. Finally, conclusions are drawn in section 6.

Abstraction and Refinement

There are many techniques for obtaining abstract models [6,8,12]. We follow the counterexample guided abstraction and refinement method proposed by Clarke et al., where abstraction is performed by selecting a set of variables that are insensitive to the desired property to be invisible [4]. We use h : S → Ŝ to denote an abstraction function, where S is the set of all states in the original model and Ŝ the set of all states in the abstract model. For clarity, s, s_1, s_2, ... are usually used to denote the states in the original model, and ŝ, ŝ_1, ŝ_2, ... indicate the states in the abstract model. Further, for a state ŝ in the abstract model, h⁻(ŝ) is used to denote the set of origins of ŝ in the original model.

The abstraction-refinement loop is depicted in Fig. 1 (Figure 1: Abstraction-refinement loop). First, the abstract model M′ is obtained by the abstraction function h. Then a model checker is employed to check whether or not the abstract model satisfies the desired property. If no errors are found, the model is correct. Otherwise, a counterexample is reported and rechecked by a checker which is used to check whether or not the counterexample is spurious. If the counterexample is not spurious, it is a real counterexample that violates the property; otherwise, the counterexample is spurious, and a refining tool is used to refine the abstract model [3,4,5,7,9,13]. Subsequently, the refined abstract model is checked with the model checker again, until either a real counterexample is found or the model is checked to be correct. In this paper, we concentrate on how to check whether or not a counterexample is spurious.

Spurious Paths

To check a spurious counterexample efficiently, we first show why spurious paths occur intuitively with an example. Then we briefly present the basic idea of algorithm SplitPath, which is used in [3,15] for checking whether or not a counterexample is spurious.

Why Spurious Paths?

Abstraction technique preserves all the behaviors of the concrete system but may introduce behaviors that are not present originally.
Therefore, when implementing the model checker with the abstract model, some reported counterexamples will not be real counterexamples that violate the desired property. This is intuitively illustrated by the traffic lights controller example [3], where a counterexample ŝ_1, ŝ_2, ŝ_2, ŝ_2, ... will be reported. However, in the concrete model, such a behavior cannot be found. So, this is not a real counterexample.

Detecting Spurious Counterexamples with SplitPath

In [3], algorithm SplitPath is presented for checking whether or not a finite counterexample is spurious. In SplitPath, as illustrated in Fig. 3, initially the set M_0 of starting states falling into h⁻(ŝ_0) is computed. Then, from the image of the states in I ∩ h⁻(ŝ_0), i.e. R(I ∩ h⁻(ŝ_0)), the set of states falling into h⁻(ŝ_1) is computed. Generally, for any i ≥ 1, M_i = R(M_{i−1}) ∩ h⁻(ŝ_i) is computed. Infinite counterexamples are more complicated to deal with, since the last state in the counterexample can never be reached. Thus, a polynomial number of unwindings of the loop in the counterexample is needed [3]. That is, an infinite counterexample can be reduced to a finite counterexample by unwinding the loop a polynomial number of times. Accordingly, SplitPath can be used again to check whether or not this infinite counterexample is spurious.

Failure States and Spurious Counterexamples

In [4,5], a spurious counterexample is informally defined as a counterexample in the abstract model which does not exist in the concrete model. In this section, we give a formal definition for spurious counterexamples based on the formal definition of failure states. To this end, for each state ŝ_i in the counterexample, the sets In(ŝ_i) and Out(ŝ_i) of its concrete origins are defined first: In(ŝ_i) collects the origins of ŝ_i that can be entered from an origin of ŝ_{i−1}, and Out(ŝ_i) collects the origins of ŝ_i from which an origin of ŝ_{i+1} can be reached. Note that for the last state ŝ_n in a finite counterexample, Out(ŝ_n) is defined with respect to F, where F is the set of states without any successors in the original model. Accordingly, a failure state can be defined as follows.

Definition 1. (Failure States) A state ŝ_i in a counterexample is a failure state if In(ŝ_i) ∩ Out(ŝ_i) = ∅.

Definition 2. (Spurious Counterexamples) A counterexample Π̂ in an abstract model K̂ is spurious if there exists at least one failure state ŝ_i in Π̂.

Example 2. Fig. 5 shows a spurious counterexample where state 2 is a failure state. In the set h⁻(2) = {7, 8, 9} of the origins of state 2, 9 is a dead state, 7 is a bad state, and 8 is an isolated state.

Algorithms for Detecting Spurious Counterexamples

Based on the formal definition of spurious counterexamples, new algorithms for checking whether or not a counterexample is spurious are presented in this section.

Algorithm by Detecting the First Failure State

Algorithm CheckSpurious-I takes a counterexample as input and outputs the first failure state in the counterexample. Note that a counterexample may be a finite path ⟨ŝ_0, ŝ_1, ..., ŝ_n⟩, n ≥ 0, or an infinite path ⟨ŝ_0, ŝ_1, ..., (ŝ_i, ..., ŝ_j)^ω⟩, 0 ≤ i ≤ j, with a loop suffix (a suffix produced by a loop). A finite counterexample can be checked directly, while for an infinite one we need only check its Complete Finite Prefix (CFP) ⟨ŝ_0, ŝ_1, ..., ŝ_i, ..., ŝ_j⟩, since whether or not a state ŝ_i is a failure state only relies on its pre- and post-states. It is pointed out that in the CFP ⟨ŝ_0, ŝ_1, ..., ŝ_i, ..., ŝ_j⟩ of an infinite counterexample, the loop-opening state ŝ_i also serves as the post-state of the last state ŝ_j. (Algorithm CheckSpurious-I, sketch: iterate i over the states of the counterexample; whenever In(ŝ_i) ∩ Out(ŝ_i) = ∅, return s_f = ŝ_i and break; if i reaches n + 1, report that Π̂ is a real counterexample.)

Algorithm Analyzing. In algorithm CheckSpurious-I, checking whether or not a state ŝ_i is a failure state only relies on ŝ_i's pre- and post-states, ŝ_{i−1} and ŝ_{i+1}; while in algorithm SplitPath, checking state ŝ_i depends on the whole prefix ŝ_0, ..., ŝ_{i−1} of ŝ_i.
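To make the per-state check concrete, the following Python sketch is our explicit-state toy illustration of the idea (it is our reading of the failure-state condition, not the authors' implementation; the data structures and helper names are hypothetical).

```python
# Explicit-state toy illustration of the local failure-state check.
# `trans` maps a concrete state to its successors; `h_inv` maps an abstract
# state to its concrete origins (both are hypothetical names, not the paper's).
def in_set(trans, h_inv, prev_abs, cur_abs):
    """Origins of cur_abs that can be entered in one step from an origin of prev_abs."""
    cur = set(h_inv[cur_abs])
    return {t for s in h_inv[prev_abs] for t in trans.get(s, ()) if t in cur}

def out_set(trans, h_inv, cur_abs, next_abs):
    """Origins of cur_abs with at least one successor among the origins of next_abs."""
    nxt = set(h_inv[next_abs])
    return {s for s in h_inv[cur_abs] if any(t in nxt for t in trans.get(s, ()))}

def first_failure_state(path, trans, h_inv):
    """Return the index of the first failure state on the abstract path, or None.

    A state path[i] is treated as a failure state when no concrete origin is
    both entered from path[i-1] and able to continue to path[i+1],
    i.e. when In and Out do not intersect.
    """
    for i in range(1, len(path) - 1):
        if not (in_set(trans, h_inv, path[i - 1], path[i])
                & out_set(trans, h_inv, path[i], path[i + 1])):
            return i
    return None  # no failure state: the counterexample is real

# Toy example in the spirit of Fig. 5: the abstract state "s1" has origins
# {7, 8, 9}, but none of them both receives the path and continues it.
trans = {1: [7], 7: [], 8: [], 9: [10]}
h_inv = {"s0": [1], "s1": [7, 8, 9], "s2": [10]}
print(first_failure_state(["s0", "s1", "s2"], trans, h_inv))  # -> 1
```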
Based on this locality, to check a periodic infinite counterexample, several repetitions of the periodic part are needed in SplitPath. In contrast, this can easily be done by checking the complete finite prefix ⟨ŝ_1, ŝ_2, ..., ŝ_i, ..., ŝ_j⟩ in algorithm CheckSpurious-I. Thus, the polynomial number of unwindings of the loop can be avoided. That is, for infinite counterexamples, the finite prefix to be checked will be polynomially shorter than the one in algorithm SplitPath.

Algorithm by Detecting the Heaviest Failure State

In algorithms SplitPath and CheckSpurious-I, the first failure state is always detected. Then further refinement will be done based on the analysis of this failure state. Possibly, several failure states may occur in one counterexample, so which one is chosen to be refined is not considered in SplitPath. Obviously, if a failure state shared by more paths is refined, a number of model checking iterations will be saved in the whole abstraction-refinement loop. With this consideration, we check the states which are shared by more paths first. To do so, for an abstract state ŝ, as illustrated in Fig. 6, EIn(ŝ) and EOut(ŝ) are defined. EIn(ŝ) equals the number of edges connecting to the states in h⁻(ŝ) from the states outside of h⁻(ŝ), and EOut(ŝ) is the number of edges connecting to the states outside of h⁻(ŝ) from the states in h⁻(ŝ). Accordingly, EIn(ŝ) × EOut(ŝ) is the number of the paths where ŝ occurs. For convenience, we call EIn(ŝ) × EOut(ŝ) the weight of the abstract state ŝ. Based on this, algorithm CheckSpurious-II is given for detecting the heaviest failure state: the states are visited in descending order of weight (following an index array w), and the first failure state found in this order, s_f = ŝ_w[i], is returned; if no failure state is found, Π̂ is a real counterexample.

Parallel Algorithms

Considering that whether or not a state ŝ_i is a failure state only relies on the pre- and post-states, ŝ_{i−1} and ŝ_{i+1}, of ŝ_i, the algorithm can be naturally parallelized, as presented in algorithms CheckSpurious-III and CheckSpurious-IV. In CheckSpurious-III, at any time, if a failure state is detected by a processor, all the processors will be stopped and the failure state is returned. Otherwise, if no failure states are reported, the counterexample is a real one. That is, the algorithm always reports the first failure state detected by the processors. Note that a boolean array c[n] is used to indicate whether or not a state in the counterexample is a failure one. Initially, for all 0 ≤ i ≤ n, c[i] is undefined (c[i] = ⊥), and c[i] == true means state ŝ_i is not a failure state. (CheckSpurious-III, sketch: each processor checks its assigned states k; if In(ŝ_k) ∩ Out(ŝ_k) ≠ ∅, it sets c[k] = true; otherwise it returns s_f = ŝ_k and stops all processors; if in the end c[i] == true for all 0 ≤ i ≤ n, Π̂ is reported as a real counterexample.) In CheckSpurious-IV, the weights of the states are also considered, and the heaviest failure state is always found.

Conclusion

Based on a formal definition of spurious paths, a novel approach for detecting spurious counterexamples is presented in this paper. In the new approach, whether or not a state ŝ_i is a failure state only relies on ŝ_i's pre- and post-states in the counterexample. So, for infinite counterexamples, the polynomial number of unwindings of the loop can be avoided. Further, the algorithm can easily be improved by detecting the heaviest failure state, so that a number of model checking iterations can be saved in the whole abstraction-refinement loop. Also, the algorithm can be naturally parallelized. The presented algorithms are useful in improving abstraction-based model checking, especially counterexample guided abstraction refinement model checking.
In the near future, the proposed algorithm will be implemented and integrated into the tool CEGAR. Further, some case studies will be conducted to evaluate the algorithms.
2011-09-26T02:49:05.000Z
2011-09-26T00:00:00.000
{ "year": 2011, "sha1": "2c24b7a75932f8783e3411c6a433f1556c636689", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1109.5506.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "425df7fa00a3bbddccf28ad31133364549a5b3a5", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
55580102
pes2o/s2orc
v3-fos-license
In vitro germination and viability of pea pollen grains after application of organic nano-fertilizers

The objective of this study was to evaluate the influence of two organic nano-fertilizers, Lithovit and Nagro, on in vitro germination, pollen tube elongation and pollen grain viability of Pisum sativum L. cv. Pleven 4. The effect of their application was high and exceeded the data for the untreated control (by 44.2 and 47.23% regarding pollen germination and pollen tube elongation, respectively), as well as the effect of the control organic algal fertilizer Biofa (by 17.5 and 27.9%, respectively). Pollen grains were inoculated in four culture media. A medium containing 15% sucrose and 1% agar had the most stimulating impact on pea pollen grains. Pollen viability, evaluated by staining with 1% carmine, was within limits of 74.72-87.97%. The highest viability of pollen grains was demonstrated after the application of the Nagro organic nano-fertilizer.

INTRODUCTION

Knowledge of the viability and capacity of pollen germination, aside from pollen tube growth, is crucial for investigation of the reproductive biology and genetic breeding of some plants, showing the direction and underlying controlled hybridization aimed at creating new hybrids and/or raising pollen viability (Dane et al., 2004; Salles et al., 2006). Low germination percentages and slow elongation of pollen tubes may influence seed formation. Studies of in vitro pollen germination and pollen tube growth are important for understanding fertilization and seed formation in flowering plants and are very useful for explaining any lack of plant fertility (Büyükkartal, 2003). Many factors are able to affect in vitro pollen germination: botanic species, cultivar, plant nutritional state, culture medium, temperature, pollen sampling time, photoperiod, sampling method, application of fertilizers or pesticides to plants, pollen storage conditions, etc. (Stanley & Linskens, 1974; Neves et al., 1997). There are relatively few studies that have determined the effects of organic fertilizers (manures, biofertilizers based on microorganisms or plant extracts, etc.) on flowering characteristics, and pollen germination in particular (Hassan et al., 2015). Furthermore, estimates are lacking on the influence of nano-fertilizers on pollen viability and in vitro pollen germination in different plant species. Nano-fertilizers are innovative products with some unique features, such as ultra-high absorption, increased yield and more intensive photosynthesis, but scant literature reports on the subject are available in scientific journals (Sekhon, 2014; Manjunatha et al., 2016).

In the laboratory, pollen viability can be determined quantitatively in media that stimulate pollen grain development or by using dyes, such as aniline blue, propionic carmine, acetic carmine, IKI (iodine + potassium iodide), Alexander's stain, etc. (Bolat & Pirlak, 1999; Wang et al., 2004). Dye tests, used as indicators of pollen viability, have the advantage of being a faster and easier method than experiments with in vitro pollen germination. However, different dye types may produce different results. Truly viable pollen can be quantified only by in vitro germination tests, because cultivation conditions allow an adequate expression of the physiological capacity of pollen tube formation (Bolat & Pirlak, 1999).
The aim of this experiment was to test the effects of two organic nano-fertilizers (Lithovit and Nagro) on in vitro pollen grain germination, pollen tube growth, as well as pollen viability in pea (Pisum sativum L. cv.Pleven 4). In vitro pollen germination For determining in vitro pollen germination, flowers of pea plants in anthesis were collected early in the morning.The pea plants (cv.Pleven 4) were previously treated (twice) at the growth stage 55 (BBCH-scale) with either of the two organic leaf nano-fertilizers: Lithovit (containing CaCO 3 , MgCO 3 , Fe) at a concentration of 0.2% or Nagro (containing the elements N, P, K, Mg, Zn, Fe, Cu, Mo, B, Ca, Se, etc.) at a concentration of 0.05%.The interval between two treatments was 10 days.The fertilizers were applied by a small-volume Matabi hand sprayer, and the solution volume was 20 l da -1 .The control was watered with the same volume of distilled water.The organic leaf fertilizer Biofa (brown algae[Ascorphyllum nodosum] extract, extremely rich in macro-and microelements, alginic acid, natural plant hormones, PGR enzymes, etc.) was used at a concentration of 0.5% as the second control.Biofa had also been examined in our earlier field experiments.It showed a high effectiveness (on yield and nutritional value of forage crops), which was very comparable to the effects of conventional fertilizers. Pollen grains of 10 flowers per variant were collected and then inoculated in Petri dishes (9 cm diameter) containing 40 ml of culture medium, using a brush for homogenous distribution of material.Since different media may affect germination results (Stanley & Linskens, 1974), four culture media were used in the present experiment: medium A -15% sucrose; medium B -15% sucrose, 100 mg/L H 3 BO 3 , 300 mg/L Ca(NO 3 ) 2 ; medium C -medium А with additional 1% agar; medium D -medium B with additional 1% agar.The dishes were subdivided into quadrants, each one representing a replication with approximately 100-150 pollen grains, totaling 12 replications for each culture medium. After inoculation, the dishes were kept at controlled temperature conditions (26 ºC) for 24 hours (Nikolova et al., 2012) before reporting the germinated pollen grains and pollen tube length (stereomicroscope, magnification 10x).After 24 h, tube growth was stopped by adding 10% ethanol (Cresti & Tiezzi, 1992).Six microscopic areas (per quadrant) were counted randomly for evaluation of pollen germination and for measuring pollen tube length in each Petri dish.Pollen grains were assumed to have germinated when the pollen tube length was equal to or longer than the diameter of the pollen grain itself.The length of pollen tube was measured directly with an ocular micrometer fitted to the microscope eyepiece based on the micrometer scale (µm) (Sharafi et al., 2011).The experimental design for pollen germination / pollen tube length was double factorial: 4 organic treatments /Lithovit, Nagro, Biofa, control/ × 4 cultural media /A, B, C, D. The data in regard to pollen germination percentage were previously transformed to arcsin √x(%)/100. 
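The angular transformation just mentioned is a standard variance-stabilizing step for percentage data before analysis. The snippet below shows one common reading of it (our illustration only, not the Statgraphics routine used by the authors).

```python
import math

def arcsine_sqrt(percentage):
    """Angular (arcsin square-root) transform of a percentage, returned in degrees."""
    return math.degrees(math.asin(math.sqrt(percentage / 100.0)))

# Example: transforming two of the mean germination percentages reported below.
for p in (45.04, 40.83):
    print(p, round(arcsine_sqrt(p), 2))
```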
Pollen viability

Pollen grains from the anthers of pea plants tested in vitro were excised and stained on glass slides, each with a drop of 1% carmine (Coser et al., 2012). They were covered with coverslips and, after a couple of minutes, observed under the microscope (10x lens). To determine the viability of pollen, three anthers per organic treatment variant were analyzed and 100 pollen grains/slide were counted. The percentage of pollen fertility was evaluated based on the proportion of stained pollen grains (viable) against unstained grains (nonviable). Pollen viability percentage was also transformed to arcsin √x(%)/100 prior to statistical analysis.

The obtained data were statistically processed using the software Statgraphics Plus for Windows Ver. 2.1 at LSD 0.05%.

RESULTS AND DISCUSSION

The study revealed a positive effect of the application of organic leaf fertilizers on in vitro pollen germination and pollen tube elongation in pea plants (Table 1). The highest germination percentage, as well as the greatest pollen tube length, regardless of cultural medium, were observed after applying the organic nano-fertilizer Nagro (45.04% and 588.58 µm on average, respectively), followed by Lithovit (40.83% and 558.60 µm on average, respectively). These values of the two parameters significantly exceeded those of the organic fertilizer Biofa (by 17.5 and 27.9% on average regarding pollen germination and pollen tube length, respectively) and the control (by 44.2 and 47.23%, respectively). Overall, the higher the germination percentage, the higher is the chance for fertilization (Salles et al., 2006). Consequently, it is important to find adequate organic products which have positive effect on pollen grain germination, which ultimately results in higher plant fertility. Hassan et al. (2015) also reported an increased pollen germination after using different organic fertilizers (poultry manure, sheep manure, a biofertilizer consisting of Azotobacter chrococcum, Bacillus megaterium and Bacillus circulans) and their combinations. According to Bhangoo et al. (1988), such enhancement may be attributed to stimulating effects of the absorbed nutrients on the photosynthetic process, which certainly reflected positively on flowering characteristics, including pollen germination. The high effect of fertilizers is due to the nano dimensions of their particles, as well as the presence of boron in Nagro which is directly influencing the processes of flowering and pollination of forage crops (Pavlov & Kostov, 2001). Positive effects of a variety of nano-materials, mostly metal-based and carbon-based nano-materials, on growth and development of different crop plants have been revealed (Sekhon, 2014). Sheykhbaglou et al. (2010) reported an improvement in agronomic traits (pods, grain yield) of soybean after using nano-iron oxide. However, not one report has been made on changes in pollen germination after using organic nano-fertilizers.

Under the conditions that existed in our present experiment, the variation in pollen tube growth in different media (A, B, C, D) was considerable. In the culture medium C, disregarding the organic fertilizer application, the mean pollen germination was highest (77.06%), as well as pollen tube length (936.49 µm), and it was followed by medium D (45.04% and 592.94 µm, respectively). The lowest values were recorded in culture medium A, containing only sucrose. According to Gwata et al.
(2003), differences in in vitro pollen grain germination resulted from a complex interaction between the morphology and physiology of pollen grains and components of the medium.For germination, a medium should contain some nutrients (e.g., calcium, magnesium sulfate, potassium nitrate or boric acid) (Soares et al., 2008).Agar added to such media provides stability, so that the growth of pollen tubes can be observed (Martin, 1972).In general, the medium for in vitro pollen germination varies depending on plant species and cultivar (Dane et al. 2004;Frazon et al., 2005).Still, little information is available on pollen tube growth in pea.Nikolova et al. (2012) found that an optimal medium for P. sativum contained agar, sucrose, H 3 BO 3 and CaCl 2 .Our results are in accordance with previous studies (Warnock & Hagedorn, 1956) reporting that 15% sucrose and 1% agar made the most adequate medium. Pollen staining tests are among the most reliable and most widely used pollen viability tests (Cresti & Tiezzi, 1992).Fast estimation of pollen viability is of great value to plant breeders and geneticists in eliminating the time and space problems (Khosh-Khui et al., 1976).Pollen pea viability (cv.Pleven 4), evaluated by staining with 1% acetic carmine, was within limits of 74.72-87.97%(Figure 1).The highest, and statistically significant viability was demonstrated by pollen grains after the application of Nagro organic nano-fertlizer.Although pollen viability was higher after treatment with Lithovit and Biofa than in the control, the differences were not statistically significant.As a whole, the trends regarding pollen grain viability after different organic fertilizer applications corresponded to the trends regarding in vitro germination percentage and pollen elongation.In conclusion, the experiment revealed a positive effect of the application of organic nano-fertilizers Lithovit and Nagro on in vitro pollen germination and pollen tube elongation in P. sativum.The effect exceeded that of the organic algal fertilizer Biofa (by 17.5 and 27.9%, in regard to pollen germination and pollen tube length, respectively) and the control (by 44.2 and 47.23%, respectively).The obtained results enrich the current information about the activity of nano-fertilizers.In the future, more detailed research is needed to clarify the mechanism of action and the consequences of using nano-fertilizers. Figure 1 . Figure 1.The influence of organic nano-fertilizers on pollen viability of Pisum sativum Table 1 . Influence of organic fertilizer treatments and cultural media on in vitro pollen germination and pollen tube length of Pisum sativum Means marked by the same letter did not differ statistically at 5% probability *medium A -15% sucrose; medium B -15% sucrose, 100 mg/L H 3 BO 3 , 300 mg/L Ca(NO 3 ) 2 ; medium C -medium А with additional 1% agar, medium D -medium B with additional 1% agar
2018-12-08T03:58:46.952Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "75e15df93e67e052931ab39eccad2381b111ab33", "oa_license": "CCBYSA", "oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=1820-39491701061G", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "75e15df93e67e052931ab39eccad2381b111ab33", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
245830626
pes2o/s2orc
v3-fos-license
Determination of Company's Value: A Study with Investment Opportunity as a Moderator Variable This research focuses on the state of a company's valuation, which is always changing. The utilized variables to estimate firm value include free cash flow and interest rates, both of which have a positive relationship with company's value. The second purpose is to investigate the current situation of investment opportunities in industrial companies that are similarly highly volatile. The availability of free cash flow indicates the interest rate has a positive relationship with the investment opportunity set. An explanatory research design is used in this study, which aims to examine the correlation between variables. The manufacturing companies that were listed on the Indonesia Stock Exchange between 2013 and 2018 are the focus of this study. Thus, data were collected from 612 units using a purposive sampling technique. The findings reveal that whereas free cash flow has a strong positive indirect effect on company value via mediating the investment opportunity set, interest rates have a negative and minor indirect effect on firm value via mediating the investment opportunity set. II. LITERATURE REVIEW A. Corporate Values: Concepts and Theories The worth of the company (value of the firm) is a condition that a company has reached as an indication of public faith in the company after passing through a series of actions over several years, from the company's inception to the present. Increasing the company's worth is a success that fulfills the wishes of the owners, because as the company's value rises, so does the owners' well-being. Firm value is critical since it reflects a firm's performance and can influence investors' perceptions of that company. A strong company value will lead the market to believe in the business's current situation or future prospects [3]. The price of shares traded on the stock exchange is a measure of company value for companies that issue shares in the capital market. The higher a company's stock price, the higher its value, which has an impact on the owner's financial well-being. The greater a company's stock price, the higher its value, which has an impact on the company's owner's wealth. Tobin'Q can be used to determine the value of a company. The main indicator of a company's worth can be measured from a variety of perspectives. When examined from a certain perspective, this signifies that the company's value is regarded to be good. Free cash flow, interest rate, investment opportunity set, dividend policy, and managerial ownership are all indicators of firm value. Dividend policy and managerial ownership both have a moderating effect. B. Free Cash Flow and Investment Opportunity Set After the company has invested all of its cash in fixed assets, new products, and working capital required for corporate operations, free cash flow is cash flow that is available for distribution to all principals and debt owners. The term "cash flow" refers to the amount of money that is really available for distribution to investors. Increasing a company's cash flow is one approach for management to make it more valuable [4]. Investment alternatives provide potential for expansion, but organizations may not always take use of them. Companies who do not take advantage of investment possibilities incur higher costs than the value of the opportunity lost. Free cash flow has a considerable impact on the number of investment alternatives available [5]. 
The following hypothesis is presented based on this explanation: H1: Free cash flow affects the investment opportunity set. C. Interest Rate and Investment Opportunity Set Changes in the interest rate cause changes in the quantity of investment demanded: as interest rates rise, the expected return on an investment falls significantly. If variables other than the interest rate change, however, investment demand itself is predicted to shift. The relationship between interest rates and investment is negative, meaning that the lower the interest rate, the more investments will be made, and the higher the interest rate, the fewer entrepreneurs will invest [6]. The following hypothesis is offered in this study: H2: The interest rate affects the investment opportunity set. D. Free Cash Flow and Company Value A company's free cash flow offers several advantages for increasing company value, including boosting the welfare of shareholders and managers through dividend distribution and serving as a source of internal capital for operational financing. One study shows that free cash flow has a significant positive effect on firm value [7]. Muhardi's research, in contrast, finds that free cash flow has a negative and significant effect on firm value [9], and other studies report that free cash flow has a significant negative effect on stock prices because high free cash flow exposes management to moral hazard, which can reduce firm value [8]. These findings support agency theory, particularly the free cash flow hypothesis, which states that larger free cash flow harms company value and increases the risk that corporate cash flow will be abused. On the other hand, strong free cash flow can signal that a company is healthier because it has cash on hand, and several studies find that free cash flow has a considerable positive effect on firm value [10], [11]. The following hypothesis is offered in this study: H3: Free cash flow has an impact on the value of a company. E. Dividend Policy Plays a Moderating Role in the Effect of Free Cash Flow on Company Value Free cash flow, however, might create a conflict of interest between shareholders and managers [12], [13]. To strengthen the link between free cash flow and firm value, dividend policy is required: dividend policy plays a signaling role in efforts to reduce conflict between principal and agent. The following hypothesis is formed based on the findings of prior studies and agency theory: H3.1: Dividend policy strengthens the relationship between Free Cash Flow and Firm Value. F. Managerial Ownership Moderates the Effect of Free Cash Flow on Company Value Conflicts of interest between shareholders and managers, together with asymmetric information, cause the company to incur agency costs that reduce its financial performance in the long term. This information asymmetry allows moral hazard and adverse selection by management, who may assume that the contract they entered into with the company did not work as expected. Asymmetric information between managers and shareholders regarding the use of free cash flow therefore has the potential to cause a conflict of interest and affect company performance.
One way to reduce conflicts between principals and agents is through the managerial ownership structure, which serves a monitoring function; managerial ownership is therefore expected to increase firm value. H3.2: Managerial Ownership strengthens the relationship between Free Cash Flow and Company Value. G. Interest Rate and Firm Value Research shows that high interest rates affect the present value of the company's cash flows, so that existing investment opportunities are no longer attractive [14]. Investors then lose interest in investing, which results in a decrease in stock prices and a decrease in company value. This is in line with findings that the interest rate has a negative and significant effect on firm value [15]. Rising interest rates encourage people to save and discourage investment in the real sector. The increase in interest rates is also borne by investors in the form of higher interest costs for the company. People are unwilling to risk making high-cost investments, so investment does not develop. Many companies struggle to survive and their performance declines; declining performance can lead to falling stock prices, which means the value of the company also decreases. Thus interest rates have a significant effect on firm value [16]. H4: Interest rates influence firm value. H. Dividend Policy Plays a Role in Moderating the Effect of Interest Rates on Firm Value If interest rates increase, investors become more interested in placing their funds in the banking sector, which reduces their interest in investing in the capital market [17]. When demand for shares decreases, the share price falls and company value also decreases, so a signal is needed to encourage investors to keep investing. Therefore, dividend policy and managerial ownership are needed to strengthen the relationship between interest rates and firm value. The hypothesis built on previous research and signaling theory is formulated as follows: H4.1: Dividend Policy strengthens the relationship between the Interest Rate and Firm Value. I. Managerial Ownership Moderates the Effect of Interest Rates on Firm Value Increasing managerial ownership helps align the interests of managers and shareholders, leading to better managerial decisions and higher firm value [18], [19]. With managerial ownership, managers are more careful in making decisions because they share the consequences of the decisions taken when seeking to increase the value of the company. The hypothesis built on previous research and monitoring theory is formulated as follows: H4.2: Managerial Ownership strengthens the relationship between Interest Rates and Firm Value. J. Free Cash Flow, Investment Opportunity Set and Firm Value Free cash flow reflects the company's discretion to invest; excess retained earnings can be invested in the future. Investor confidence in the company, accompanied by investment decisions, gives a positive signal for the company's future growth and so increases the value of the company. The hypothesis built on previous research is formulated as follows: H5: There is an indirect effect of Free Cash Flow on Company Value. K. Interest Rate, Investment Opportunity Set and Firm Value The interest rate acts as a control that investors use when setting investment opportunities.
When interest rates are high, investors are influenced to invest their funds in stocks that provide high returns with low risk. Conversely, when interest rates are low, investors are influenced to invest their funds in shares that provide higher yields even at a higher level of risk. High interest rates increase the cost of capital borne by the company and cause the required return on an investment to increase [20]. The hypothesis is as follows: H6: There is an indirect effect of interest rates on Company Value. The following empirical research model is presented to determine the direct and indirect effects of free cash flow and the interest rate on firm value, moderated by dividend policy and managerial ownership, based on the theoretical foundation and empirical studies regarding firm value, free cash flow, interest rates, the investment opportunity set, dividend policy, and managerial ownership, as well as previous studies. As a result, the empirical research model is as follows: L. Research Empirical Model III. METHODOLOGY A. Research Design This study employed an explanatory research design, which aims to evaluate a theory or hypothesis in order to strengthen or refute the existing research theory or hypothesis. This quantitative approach seeks to collect information, data, and knowledge about topics that are not yet well understood. B. Population and Sampling Procedure Manufacturing companies that were listed on the Indonesia Stock Exchange (IDX) for six years, from 2013 to 2018, are the population of interest, with purposive sampling employed to obtain a representative sample. C. Analysis Techniques Panel data regression is the analysis technique used in this work; it combines time series data with cross-section data to form panel data, consisting of numerous individual units observed over a period of time. There were 102 cross sections (manufacturing enterprises) in this study, each with the same time series (from 2013 to 2018), i.e. six periods. D. Robustness Test Normality, multicollinearity, heteroscedasticity, and autocorrelation tests are used to examine the classical assumptions in the first stage. The empirical data used in this study show that the research model met all of the assumptions, allowing the panel data regression equation to be interpreted. E. Hypothesis Testing The F test was performed to examine whether the independent variables (X) had a significant simultaneous effect on the dependent variable (Y), while a t test was employed to assess the significance of each independent variable's influence on the dependent variable. Multiple regression analysis is used to determine whether there is a relationship between the independent variables, Free Cash Flow (X1) and Interest Rate (X2), together with the moderating variables Dividend Policy (M1) and Managerial Ownership (M2), and the dependent variable (Y), the company's value. Based on the statistical estimates, the t-statistic of free cash flow (X1) is 8.391543, which is greater than the t-table value of 1.645 at the 0.05 significance level, and its probability value of 0.00 is smaller than 0.05; the null hypothesis H01 is therefore rejected, clearly showing that free cash flow has a significant effect on company value.
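To make the estimation and decision rule described above concrete, the following is a minimal sketch, not the authors' actual code, of a pooled regression with moderation (interaction) terms whose t-statistics are compared against the tabled value of 1.645 at the 0.05 level. The column names (firm_value, fcf, rate, div_policy, mgr_own) are hypothetical stand-ins for Y, X1, X2, M1 and M2, and a pooled OLS is used in place of the study's panel estimator.

```python
# Illustrative sketch only: pooled OLS with interaction (moderation) terms and the
# t-statistic decision rule described in the text. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def run_hypothesis_tests(panel: pd.DataFrame, t_table: float = 1.645):
    # Main effects of free cash flow and the interest rate on firm value,
    # plus interaction terms for the two moderators.
    model = smf.ols(
        "firm_value ~ fcf + rate + fcf:div_policy + fcf:mgr_own"
        " + rate:div_policy + rate:mgr_own",
        data=panel,
    ).fit()
    decisions = {}
    for term in model.params.index.drop("Intercept"):
        t_stat = model.tvalues[term]
        # The paper's rule: the effect is significant when the t-statistic
        # exceeds the t-table value of 1.645 at the 0.05 significance level.
        decisions[term] = (round(float(t_stat), 3),
                          "significant" if abs(t_stat) > t_table else "not significant")
    return model, decisions
```

The F statistic for the joint significance of all regressors, used for the simultaneous test described above, is available from the same fit as model.fvalue and model.f_pvalue.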
Meanwhile, the interest rate (X2) has a t-statistic of 3.719586, which is greater than the t-table value of 1.645 at the 0.05 significance level, and its probability value of 0.00 is smaller than 0.05; the null hypothesis H04 is rejected, meaning that interest rates have a significant impact on company value. When the effect of free cash flow and interest rates is moderated by dividend policy (M1) and managerial ownership (M2), the t-statistic of dividend policy (M1) is 0.315349, which is smaller than the t-table value of 1.645 at the 0.05 significance level, with a probability value of 0.7526, greater than 0.05; the null hypothesis therefore cannot be rejected, indicating that dividend policy does not moderate the relationship between Free Cash Flow and Firm Value. For managerial ownership (M2) the t-statistic is 2.211168, which is greater than the t-table value of 1.645, so the null hypothesis is rejected and H3.2 is supported: managerial ownership moderates the relationship between Free Cash Flow and Firm Value. Table 2 shows that free cash flow has a considerable positive effect on firm value via the intervening variable, namely the investment opportunity set. As a result, the investment opportunity set can be used as an intervening variable to examine the impact of free cash flow on company value. Table 2 also shows that the interest rate has no substantial effect on company value through the intervening variable, the investment opportunity set. Furthermore, the investment opportunity set variable has a negative impact on firm value. As a result, using the investment opportunity set as an intervening variable to examine the impact of interest rates on firm value is ineffective. IV. RESULTS AND DISCUSSION Six points emerged as the findings of this investigation, addressing all of the hypotheses formulated previously. The findings are explained as follows: 1. Free cash flow has a strong negative impact on the investment opportunity set (IOS), which implies that it limits investment opportunities and makes it difficult to forecast the company's growth through the IOS. High debt is used to offset the agency costs that come with high free cash flow, especially for enterprises with poor investment prospects. 2. Because of the high cost of capital to invest, interest rates have a large positive effect on the investment opportunity set (IOS); high rates boost investment opportunities. Interest rate changes cause changes in investment demand: as the interest rate rises, the expected return on an investment falls rapidly, whereas changes in variables other than the interest rate shift investment demand itself. The interest rate on a loan is the most important component in determining the cost of investment; the higher the loan's interest rate, the more expensive the investment. 3. The impact of free cash flow on the value of a company. Free cash flow has a significant negative direct impact on company value, implying that free cash flow is used unhealthily in manufacturing companies, resulting in a decrease in company value. This is because economizing on operating cash and investment cash, carried out on target in accordance with the budget and work program determined by each department (covering cash receipts and cash disbursements, as well as investment cash and financing cash), can improve the effectiveness of cash management.
High free cash flow promotes agency conflict and lowers company value, indicating that it has a major negative impact on firm value. When the relationship between free cash flow and firm value is moderated by dividend policy, the association between free cash flow and firm value does not improve. This suggests that, with dividend policy as a moderator, the free cash flow of manufacturing businesses in Indonesia from 2013 to 2018 cannot boost the company's worth. This is because a company's dividend policy determines whether profits are paid out as dividends to shareholders or retained as earnings for future investment financing; if profits are distributed as dividends, retained earnings are reduced, and so are the overall sources of internal funds or internal financing. When free cash flow is moderated by managerial ownership, the relationship between free cash flow and firm value is strengthened. This suggests that the free cash flow of manufacturing businesses in Indonesia from 2013 to 2018 can then boost company value, because managerial ownership acts as an internal monitoring function, reducing agency expenses and minimizing conflicts between management and shareholders. 4. Interest rates have a substantial positive impact on the value of a company. This means that the interest rate is not a direct determinant in investors' decisions to put their money into equities; investors consider other aspects when deciding whether or not to invest in a firm, and they believe that a high interest rate will keep inflation in check. According to the findings of this study, interest rates have a strong positive effect on business value. The interest rate is not a direct issue that investors consider when deciding whether or not to invest in equities; when they put their money in a company, investors look at other variables that can serve as good predictors, and they are more concerned about the company's long-term viability. This contradicts the idea that interest rates have a substantial negative influence on company value on the grounds that high interest rates lead investors to move their capital to other instruments such as bonds and other securities; firm value can therefore still be used to measure market performance, and investors receive shares and returns on a variety of assets as forms of return. When the effect of interest rates is moderated by dividend policy, the effect of interest rates on company value is negligible. This means that when the high interest rates faced by manufacturing businesses in Indonesia from 2013 to 2018 are moderated by dividend policy, company value is not increased, since dividend policy cannot be used to encourage investment activities of companies with a positive net present value. Managerial ownership, by contrast, moderates the effect of interest rates on company value and has a large effect on business value. 5. The findings support an update of the research model: the indirect effect of Free Cash Flow on Firm Value, with the Investment Opportunity Set as a mediating variable, is considerable and positive. When a corporation has made all of the investments and provided all of the capital necessary to maintain the continuity of its operations, the company's free cash flow is given to investors. 6. The indirect influence of interest rates on firm value is examined with the investment opportunity set as a mediating variable.
According to the conclusions of this study, the indirect influence of interest rates on firm value, through the investment opportunity set, is negative but negligible. People are more likely to save when interest rates rise and less likely to invest in the real economy. The expense of rising interest rates is passed on to investors in the form of higher interest charges for the company. People do not want to take the risk of making high-risk investments, so investment does not develop. V. CONCLUSION According to the conclusions of this study, there is no restriction on the company's free cash flow being employed for investment opportunities. The companies' condition is very dynamic and their growth is good, supported by global economic growth of 4.8 percent and by the nature of free cash flow as an alternative source of funds for investment opportunities; consequently, the indirect effect of Free Cash Flow through the Investment Opportunity Set on company value is significantly positive. Free Cash Flow that would otherwise be distributed as dividends to shareholders is diverted to investment opportunities; instead, stockholders receive a "stock dividend," which improves their ownership position.
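As a companion illustration of the indirect (mediated) effects summarized in the conclusion, the sketch below estimates the effect of each predictor on firm value transmitted through the investment opportunity set as the product of the two path coefficients. It is not the authors' estimation code; the column names (ios, fcf, rate, firm_value) are hypothetical stand-ins for the study's variables, and pooled OLS stands in for the panel estimator.

```python
# Illustrative product-of-coefficients (mediation) sketch; not the authors' code.
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effects(panel: pd.DataFrame):
    # Path a: predictors -> mediator (investment opportunity set).
    path_a = smf.ols("ios ~ fcf + rate", data=panel).fit()
    # Path b: mediator -> firm value, controlling for the predictors.
    path_b = smf.ols("firm_value ~ ios + fcf + rate", data=panel).fit()
    b_ios = path_b.params["ios"]
    return {
        "fcf_indirect": path_a.params["fcf"] * b_ios,    # indirect effect of free cash flow
        "rate_indirect": path_a.params["rate"] * b_ios,  # indirect effect of the interest rate
    }
```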
2022-01-10T16:04:36.810Z
2022-01-08T00:00:00.000
{ "year": 2022, "sha1": "2b20c955073e03a7789bb8f46d50cf29e7537ae4", "oa_license": null, "oa_url": "https://ijefm.co.in/v5i1/Doc/6.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "fadf0dfe90141467f23093e1c2d750f8b2e3afae", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
215784314
pes2o/s2orc
v3-fos-license
Scanning Electron Microscopy Study of Retrieved Implants Suggests a Ratcheting Mechanism Behind Medial Migration in Cephalomedullary Nailing of Hip Fractures Introduction: Medial migration is the paradoxical migration of the femoral neck element (FNE) superomedially against gravity with respect to the intramedullary component of the cephalomedullary device, increasingly seen in the management of pertrochanteric hip fractures with the intramedullary nail. We postulate that the peculiar anti-gravity movement of the FNE in the medial migration phenomenon stems from a ratcheting mechanism at the intramedullary nail-FNE interface, which should inadvertently produce unique wear patterns on the FNE that can be seen with high-powered microscopy. By examining the wear patterns on retrieved implants from patients with medial migration, our study aims to draw clinical correlations to the ratcheting mechanism hypothesis. Material and Methods: Four FNEs were retrieved from revision surgeries of four patients with prior intramedullary nail fixation of their pertrochanteric hip fractures complicated by femoral head perforation. The FNEs were divided into two groups based on whether or not there was radiographic evidence of medial migration prior to the revisions. Wear patterns on the FNEs were then assessed using both scanning electron microscopy and light microscopy. Results: Repetitive, linearly-arranged, regularly-spaced, unique transverse scratch marks were found only in the group with medial migration, corresponding to the specific segment of the FNE that passed through the intramedullary component of the PFNA during medial migration. These scratch marks were absent in the group without medial migration. Conclusion: Our findings are in support of a ratcheting mechanism behind the medial migration phenomenon, with repetitive toggling at the intramedullary nail-FNE interface and progressive propagation of the FNE against gravity. INTRODUCTION In recent years, load sharing devices such as fixation with intramedullary nails have gained popularity in the management of pertrochanteric hip fractures 1. These cephalomedullary devices offer advantages such as more efficient load transfer through a shorter lever arm, significantly less soft tissue disruption, and shorter operative time, and have been shown to have superior outcomes when compared to the traditional extramedullary sliding screw devices, particularly in unstable, multifragmentary fractures (AO type A2/A3) [2][3][4][5][6][7]. Medial migration is a phenomenon seen almost exclusively in the management of pertrochanteric hip fractures with the intramedullary nail (Fig. 1). This is the paradoxical migration of the femoral neck element (FNE) superomedially against gravity with respect to the intramedullary component of the cephalomedullary device, first seen in the description of the Z-effect by Werner-Tutschku et al in their series of 70 proximal femur fractures managed with the Proximal Femoral Nail (PFN) 8. Medial migration leads to complications with considerable morbidity including femoral head perforation, penetration into the acetabulum, destruction of the hip joint, and in some cases, migration into the pelvic cavity (Table I). This is a poorly understood phenomenon, increasingly reported in the literature in the last decade, with limited studies investigating the biomechanics of the phenomenon to date (Table I).
Weil et al proposed that toggling is required for medial migration to occur based on consistent radiological findings of the fracture pattern involving the medial calcar and the greater trochanter seen in their case series of eight pertrochanteric hip fractures where medial migration occurred 9. They went on to prove their hypothesis with a biomechanical model specifically engineered for toggling to occur and were successful in recreating the medial migration phenomenon in all five different nail designs tested [Synthes TFN, Synthes PFN, Synthes PFNA, Stryker Gamma-3 nail and Smith and Nephew IMHS nail] 9. To date, there have been no retrieval studies to validate Weil et al's toggling hypothesis. We postulate that the peculiar anti-gravity movement of the FNE in the medial migration phenomenon stems from a ratcheting mechanism at the intramedullary nail-FNE interface. This allows FNE motion only in one direction while preventing motion in the opposite direction, which will inadvertently produce unique wear patterns on the FNE that can be seen with high-powered microscopy as the FNE pivots on the intramedullary nail during toggling. We aim to further investigate the medial migration phenomenon and the proposed ratcheting mechanism by studying retrieved implants from patients who have undergone revision surgery as a result of the medial migration phenomenon. By examining the wear patterns on the retrieved implants and correlating these patterns with findings from serial radiographs, our study aims to draw clinical correlations to the ratcheting mechanism hypothesis. MATERIALS AND METHODS Four FNEs (cephalic blades) were retrieved from revision surgeries of four patients with prior fixation of their pertrochanteric hip fractures with the Synthes Proximal Femoral Nail Antirotation (PFNA), complicated by FNE perforation of the femoral head. Radiographic analysis of plain radiographs was performed and the FNEs were divided into two groups based on whether or not there was medial migration prior to the revisions (n=2 per group). Wear patterns on the FNEs were assessed using both scanning electron microscopy (SEM) and light microscopy. Correlations of the FNE wear patterns with findings from the corresponding radiographic analysis were then performed and compared. The SEM was used in view of its potential for higher magnification and its ability to create an all-in-focus image when viewing a 3-dimensional specimen with significant variations in the Z-axis, by postprocessing recorded stacks of through-focus images to overcome the depth-of-field limit 10. Extraction of the FNEs during the revision surgeries was performed in accordance with the manufacturer's recommendations for FNE removal. All FNEs were extracted uneventfully using the Synthes PFNA Blade Extraction Set. Markings were made on the FNEs at regular intervals using a marker pen and numbered to facilitate orientation and localisation of specific scratch marks on the FNE. Standard preparation with application of a 15nm gold coating to the surface of the FNEs using a sputter coater was performed to facilitate visualisation and surface analysis with the SEM [FEI Quanta 650 FEG]. Viewing was performed at 10kV for all magnifications. A montage at 5x magnification was created to facilitate spatial orientation, and the FNEs were reviewed systematically in segments at 40x, 80x, 160x and 300x magnification.
Radiographic analysis was performed using plain radiographs with anterior-posterior (AP) and lateral views. Measurements of the medial migration distance and tip-apex distance (TAD), and identification of the specific segment of the FNE that passed through the intramedullary component of the PFNA during medial migration, were made using software tools [CARESTREAM Vue Motion]. Assessment of the fracture configuration and of the position of the tip of the FNE within the nine Cleveland zones in the femoral head was also performed. The light microscope [Olympus SZX12] was used to facilitate pinpointing of the exact location of specific scratch marks on the FNEs, to aid correlation with the specific segment of the FNE that passed through the intramedullary component of the PFNA during medial migration. This was performed systematically in segments at 63x and 90x magnification. RESULTS Similar longitudinal scratch marks on both the superior and inferior ridges of the FNE were seen in the retrieval specimens from all four patients. There were, however, unique wear patterns present only on the FNEs from the group with medial migration, corresponding to the segment of the FNE that had passed through the intramedullary component of the PFNA during medial migration (Fig. 2). These were indentations made by the pivoting action of the FNE on the intramedullary component of the PFNA at the intramedullary nail-FNE interface. Repetitive, linearly-arranged, regularly-spaced transverse scratch marks were seen on the apex of the inferior ridge of the FNE in both patients with medial migration (Fig. 3-8). These are better appreciated at higher magnification (300x), with more transverse scratch marks seen at varying depths and at closer intervals. These consistent, characteristic scratch marks were found only in the segment of the FNE that passed through the intramedullary component of the PFNA during medial migration. The angle that these transverse scratch marks made with respect to the longitudinal axis of the FNE was consistent with the angle that the FNE made with respect to the opening of the intramedullary component of the PFNA at the intramedullary nail-FNE interface at the apex of the inferior ridge (Fig. 7). These findings were suggestive of (i) repetitive toggling at the intramedullary nail-FNE interface, with scratch marks made as a result of a pivoting process at the intramedullary nail-FNE interface when the implant is under load, and (ii) progressive propagation of the FNE superomedially driven by an underlying cyclical process. No transverse scratch marks or scratch patterns unique to a particular part of the FNE were seen on either FNE in the group without medial migration. Longitudinal scratch marks similar to those found on the FNEs in the patients with medial migration were seen on both the superior and inferior ridges, extending through the whole length of the FNE. An example of these longitudinal scratch marks is shown in Fig. 9. Table II shows a summary of the patients' demographics, fracture and fixation characteristics, and relevant time points of medial migration and surgery. The mean age was higher in the medial migration group at 85.3 years compared to the group without medial migration at 75.0 years. All patients in our study had BMI less than 20 except for one patient in the group without medial migration who had BMI 30.3. The male to female ratio in both groups was the same. All patients were Chinese. The medial migration distances seen on radiographs for our patients with medial migration were 22.3mm and 12.8mm, seen at 2.6 months and 12.3 months, respectively.
The timing of revision surgery was similar in both groups, with one early failure (3-4 months post index surgery) and one late failure (12 months post index surgery). The indication for revision surgery was FNE perforation of the femoral head in all cases, with penetration into the acetabulum in three of the four cases. The pattern of femoral head perforation, however, was different between the two groups. In the group without medial migration, superior cut-out was seen in both cases with varus collapse of the proximal fracture fragment. In the group with medial migration, cut-out occurred medially in both cases, in line with the axis of the femoral neck element, without rotational displacement or varus collapse of the proximal fracture fragment. Unstable fracture configurations (AO/OTA 31A2.3) were seen in both cases in the group with medial migration and in only one case in the group without medial migration. Comminution at the greater trochanter and an unstable medial calcar pattern were seen in these cases of unstable pertrochanteric fractures. Fig. 10 shows the serial post-operative radiographs in the medial migration group, with progressive superomedial migration of the FNE leading to fixation failure with femoral head perforation, FNE penetration into the acetabulum and destruction of the hip joint. Fig. 11 shows the serial post-operative radiographs demonstrating femoral head perforation, FNE penetration into the acetabulum and varus collapse of the proximal fracture fragment in the group without medial migration. TADs were 18.9mm and 39.8mm in the group with medial migration, and 15.2mm and 32.9mm in the group without medial migration. The positions of the FNE tip within the femoral head were center-center and inferior-anterior in the group with medial migration, and superior-center and superior-anterior in the group without medial migration. DISCUSSION Weil et al proposed that toggling is required for medial migration of the femoral neck element in the cephalomedullary device to occur, based on the consistent fracture pattern involving the medial calcar and the greater trochanter seen in their case series of eight pertrochanteric hip fractures where medial migration occurred 9. In our group with medial migration, consistent findings of an unstable pertrochanteric fracture configuration (AO/OTA 31A2.3) were found in all patients, with deficits seen in both the medial calcar and the greater trochanter, similar to Weil et al's case series. Weil et al's toggling theory was supported by their biomechanical study, where they were successful in recreating the medial migration phenomenon in all five different nail designs tested [Synthes TFN, Synthes PFN, Synthes PFNA, Stryker Gamma-3 nail and Smith and Nephew IMHS nail] with a biomechanical model specifically engineered for toggling to occur. No medial migration was seen when toggling was intentionally restricted in any of the cephalomedullary nail designs with a single femoral neck element tested. In our study with the Synthes PFNA, which has a single femoral neck element, we found repetitive, linearly-arranged, regularly-spaced transverse scratch marks only on the FNEs from the group with medial migration, corresponding to the segment of the FNE that had passed through the intramedullary component of the PFNA during medial migration.
These characteristic wear patterns were indentations made by the pivoting action of the FNE on the intramedullary component of the PFNA at the intramedullary nail-FNE interface, suggestive of repetitive FNE toggling and progressive migration of the FNE driven by an underlying cyclical process, in support of Weil et al's toggling theory. The longitudinal scratch marks common to all retrieval FNEs may have been made during the insertion of the FNE with the hammer or during the removal of the FNE with the slotted hammer. Medial migration has also been observed in dual lag screw intramedullary nail systems, in the Z-effect phenomenon 8. Interestingly, preventing nail toggle did not prevent medial migration of the distal FNE when two femoral neck implants [Synthes PFN] were used in Weil et al's study, suggesting that the mechanism of migration in two-screw devices may be different 9. Migration was prevented only with clamping of the nail and removal of the superior neck element 9. Cephalomedullary nail fixation devices have a significantly lower primary cut-out rate compared to extramedullary devices [11][12][13][14][15][16][17]. Despite this greater resistance to cut-out, femoral head cut-out remains the most common complication of cephalomedullary nail fixation in the management of pertrochanteric hip fractures, the majority of which is believed to be the result of biomechanical failure 11,12. In the study by Nikoloski et al 19, all cut-outs that occurred had a TAD of either less than 20mm or more than 30mm; no cut-outs occurred with a TAD between 20-30mm. Nikoloski et al proposed that the helical blade behaves differently to a screw, and that placement too close to the subchondral bone may lead to penetration through the head. Our study findings were similar to Nikoloski's study, with all cut-outs occurring at TADs of either less than 20mm or more than 30mm in both groups. Based on the wear patterns seen in our study unique to the group with medial migration, we postulate that medial migration requires two criteria to occur: (i) toggling, and (ii) propagation of the femoral neck element medially with respect to the proximal fracture fragment. This is a progressive process where perforation of the femoral head and acetabulum occurs during the compression phase (e.g. single leg stance), and propagation of the femoral neck element medially occurs during the tension phase (e.g. when the lower limb is lifted off the ground). Fig. 12 shows a diagrammatic representation of the postulated mechanism. The clockwise moment of the femoral neck element during the compression phase prevents lateral migration of the femoral neck element, allowing perforation of the femoral head to occur, while the anticlockwise moment of the femoral neck element during the tension phase results in propagation of the femoral neck element medially with respect to the intramedullary component. This leads to FNE motion only in one direction while preventing motion in the opposite direction, similar to a ratcheting mechanism, and would account for the anti-gravity movement of the FNE seen in the medial migration phenomenon. Given that toggling of the FNE is a bi-directional movement, potential driving factors behind this progressive process of medial migration could include activities that involve repeated cycles of loading-unloading at the hip joint, such as during gait, transfers or stance changes.
Medial migration is predisposed to in the setting of unstable pertrochanteric fracture configurations, with risk factors including (i) comminution at the greater trochanter resulting in the lack of a proximal lateral buttress for the intramedullary nail, (ii) insufficiencies at the medial calcar, either from an unstable fracture pattern or from poor reduction, and (iii) fracture non-union 26. Interestingly, the pattern of high-risk FNE tip placement resulting in cut-outs was seen only in the group without medial migration in our study. In the group with medial migration, cut-outs occurred despite FNE tip placement in low-risk positions. This is especially significant as it includes the center-center position, commonly thought to convey the highest resistance to cut-outs 22,24,26. Larger studies will be useful in assessing whether the cut-out risk with respect to FNE tip positioning in medial migration follows the regular pattern seen in cut-outs, given the unique underlying pathophysiology in medial migration. Our study is the first retrieval study in the literature investigating the medial migration phenomenon. With high-powered magnification and the scanning electron microscope's ability to create an all-in-focus image in the analysis of the 3-dimensional FNE specimen, we were able to perform a detailed and comprehensive examination of the wear patterns on the FNE specimens to verify the toggling mechanism hypothesis. The consistent, unique wear patterns found on the retrieved FNE specimens, exclusive to the medial migration phenomenon, serve as strong evidence in support of Weil et al's toggling mechanism hypothesis, in line with the radiological and biomechanical findings from their study. One limitation of our study is its small sample size. Despite the increasing number of cases reported in the last decade, medial migration remains poorly recognised to date and retrieval specimens are difficult to acquire. Although our study findings are convincing evidence of repetitive toggling occurring at the intramedullary nail-FNE interface with progressive migration of the FNE in the medial migration phenomenon, evidenced by unique, consistent wear patterns present only in the specific segment of the FNE that passed through the intramedullary component of the PFNA during medial migration, retrieval studies with larger sample sizes will be useful in confirming our findings. With toggling being a bi-directional process predisposed by an unstable pertrochanteric fracture configuration, and progressive FNE migration likely driven by an underlying cyclical process, biomechanical studies with bi-directional cyclic loading at the hip joint may be useful in investigating the role of loading-unloading at the hip in the medial migration phenomenon, particularly in unstable pertrochanteric fractures. CONCLUSION The wear patterns found on the FNEs with medial migration are in support of repetitive FNE toggling and progressive migration of the FNE, driven by an underlying cyclical process. Coupled with radiological findings of a one-directional motion of the FNE superomedially against gravity, our study findings are suggestive of a ratcheting mechanism exclusive to the medial migration phenomenon. FUNDING/SUPPORT STATEMENT This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. CONFLICT OF INTEREST On behalf of all authors, the corresponding author states that there is no conflict of interest.
2020-04-16T09:13:29.711Z
2020-03-01T00:00:00.000
{ "year": 2020, "sha1": "7163f4e2b4e4a5425ab81335c81077f5014edd49", "oa_license": "CCBY", "oa_url": "https://europepmc.org/articles/pmc7156168?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "9e1138f74b1decc1ffebb0ee363104b4c453494f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
38375575
pes2o/s2orc
v3-fos-license
Photometric redshifts in the SWIRE Survey We present the SWIRE Photometric Redshift Catalogue, 1025119 redshifts of unprecedented reliability and accuracy. Our method is based on fixed galaxy and QSO templates applied to data at 0.36-4.5 mu, and on a set of 4 infrared emission templates fitted to infrared excess data at 3.6-170 mu. The code involves two passes through the data, to try to optimize recognition of AGN dust tori. A few carefully justified priors are used and are the key to suppression of outliers. Extinction, A_V, is allowed as a free parameter. We use a set of 5982 spectroscopic redshifts, taken from the literature and from our own spectroscopic surveys, to analyze the performance of our method as a function of the number of photometric bands used in the solution and the reduced chi^2. For 7 photometric bands the rms value of (z_{phot}-z_{spec})/(1+z_{spec}) is 3.5%, and the percentage of catastrophic outliers is ~1%. We discuss the redshift distributions at 3.6 and 24 mu. In individual fields, structure in the redshift distribution corresponds to clusters which can be seen in the spectroscopic redshift distribution. 10% of sources in the SWIRE photometric redshift catalogue have z>2, and 4% have z>3, so this catalogue is a huge resource for high redshift galaxies. A key parameter for understanding the evolutionary status of infrared galaxies is L_{ir}/L_{opt}, which can be interpreted as the specific star-formation rate for starbursts. For dust tori around Type 1 AGN, L_{tor}/L_{opt} is a measure of the torus covering factor and we deduce a mean covering factor of 40%. ABSTRACT We present the SWIRE Photometric Redshift Catalogue, 1025119 redshifts of unprecedented reliability and of accuracy comparable with or better than previous work. Our methodology is based on fixed galaxy and QSO templates applied to data at 0.36-4.5 µm, and on a set of 4 infrared emission templates fitted to infrared excess data at 3.6-170 µm. The galaxy templates are initially empirical, but are given greater physical validity by fitting star-formation histories to them, which also allows us to estimate stellar masses. The code involves two passes through the data, to try to optimize recognition of AGN dust tori. A few carefully justified priors are used and are the key to suppression of outliers. Extinction, A_V, is allowed as a free parameter. The full reduced χ^2_ν(z) distribution is given for each source, so the full error distribution can be used, and aliases investigated. We use a set of 5982 spectroscopic redshifts, taken from the literature and from our own spectroscopic surveys, to analyze the performance of our method as a function of the number of photometric bands used in the solution and the reduced χ^2_ν. For 7 photometric bands (5 optical + 3.6, 4.5 µm) the rms value of (z_phot − z_spec)/(1 + z_spec) is 3.5%, and the percentage of catastrophic outliers (defined as a > 15% error in (1+z)) is ∼ 1%. These rms values are comparable with the best achieved in other studies, and the outlier fraction is significantly better. The inclusion of the 3.6 and 4.5 µm IRAC bands is crucial in the suppression of outliers. We discuss the redshift distributions at 3.6 and 24 µm. In individual fields, structure in the redshift distribution corresponds to clusters which can be seen in the spectroscopic redshift distribution, so the photometric redshifts are a powerful tool for large-scale structure studies.
10% of sources in the SWIRE photometric redshift catalogue have z > 2, and 5% have z > 3, so this catalogue is a huge resource for high redshift galaxies. A key parameter for understanding the evolutionary status of infrared galaxies is L_ir/L_opt. For cirrus galaxies this is a measure of the mean extinction in the interstellar medium of the galaxy. There is a population of ultraluminous galaxies with cool dust and we have shown SEDs for some of the reliable examples. For starbursts, we estimate the specific star-formation rate, φ*/M*. Although the very highest values of this ratio tend to be associated with Arp220 starbursts, by no means all ultraluminous galaxies are. We discuss an interesting population of galaxies with ... INTRODUCTION The ideal data-set for studying the large-scale structure of the universe and the evolution and star-formation history of galaxies is a large-area spectroscopic redshift survey. The advent of the Hubble Deep Field, the first of a series of deep extragalactic surveys with deep multiband photometry but no possibility of determining spectroscopic redshifts for all the objects in the survey, has led to an explosion of interest in photometric redshifts (Lanzetta et al 1996, Mobasher et al 1996, Gwyn and Hartwick 1996, Sawicki et al 1997, Mobasher and Mazzei 1998, Arnouts et al 1999, Fernandez-Soto et al 1999, 2002, Benitez 2000, Fontana et al 2000, Bolzonella et al 2000, Thompson et al 2001, Teplitz et al 2001, Le Borgne and Rocca-Volmerange 2002, Firth et al 2002, Chen and Lanzetta 2003, Rowan-Robinson 2003, Wolf et al 2004, Rowan-Robinson et al 2004, Babbedge et al 2004, Mobasher et al 2004, Vanzella et al 2004, Benitez et al 2004, Collister and Lahav 2004, Gabasch et al 2004, Hsieh et al 2005, Ilbert et al 2006, Brodwin et al 2006). Large planned photometric surveys, both space- and ground-based, make these methods of even greater interest. Here we apply the template-fitting method of Rowan-Robinson (2003), Rowan-Robinson et al (2004) and Babbedge et al (2004) to the data from the SPITZER-SWIRE Legacy Survey (Lonsdale et al 2003). We have been able to make significant improvements to our method. Our goal is to derive a robust method which can cope with a range of optical photometric bands, take advantage of the SPITZER IRAC bands, give good discrimination between galaxies and QSOs, cope with extinguished objects, and deliver accurate redshifts out to z = 3 and beyond. With this paper we supply photometric redshifts for 1,025,119 infrared-selected galaxies and quasars which we believe are of unparalleled reliability, and of accuracy at least comparable with the best of previous work. The areas from the SWIRE Survey in which we have optical photometry and are able to derive photometric redshifts are: (1) 8.72 sq deg of ELAIS-N1, in which we have 5-band (U'g'r'i'Z') photometry from the Wide Field Survey (WFS, McMahon et al 2001, Irwin and Lewis 2001), (2) 4.84 sq deg of ELAIS-N2, in which we have 5-band (U'g'r'i'Z') photometry from the WFS, (3) 7.53 sq deg of the Lockman Hole, in which we have 3-band photometry (g'r'i') from the SWIRE photometry programme, with U-band photometry in 1.24 sq deg, (4) 4.56 sq deg in Chandra Deep Field South (CDFS), in which we have 3-band (g'r'i') photometry from the SWIRE photometry programme, (5) 6.97 sq deg of XMM-LSS, in which we have 5-band (UgriZ) photometry from Pierre et al (2007).
In addition, within XMM we have 10-band photometry (ugrizUBVRI) from the VVDS programme of McCracken et al (2003), Le Févre et al (2004) (0.79 sq deg), and very deep 5-band photometry (BVRi'z') in 1.12 sq deg of the Subaru XMM Deep Survey (SXDS, Sekiguchi et al 2005, Furusawa et al 2008). The SWIRE data are described in Surace et al (2004, 2008). Apart from the use of flags denoting objects that are morphologically stellar at some points in the code (see section 2), we make no other use of morphological information in determining galaxy types. So reference to 'ellipticals' etc refers to classification by the optical spectral energy distribution. PHOTOMETRIC REDSHIFT METHOD In this paper we have analysed the catalogues of SWIRE sources with optical associations with the IMPz code (Rowan-Robinson 2003 (RR03), Rowan-Robinson et al 2004, Babbedge et al 2004), which uses a small set of optical templates for galaxies and AGN. The galaxy templates were based originally on empirical spectral energy distributions (SEDs) by Yoshii and Takahara (1988) and Calzetti and Kinney (1992) but have been subsequently modified in Rowan-Robinson (2003), Rowan-Robinson et al (2004) and Babbedge et al (2004). Here we have modified the templates further (section 3), taking account of the large VVDS sample with spectroscopic redshifts and good multiband photometry. The other main changes we have made to the code described by Rowan-Robinson (2003) and Babbedge et al (2004) are: • The code involves two passes through the data. In the first pass we use 6 galaxy templates and 3 AGN templates (see section 3) and each source selects the best solution from these 9 templates. The redshift resolution is 0.01 in log10(1 + z). Quasar templates are permitted only if the source is flagged as morphologically stellar. 3.6 and 4.5 µm data are only used in the solution if log10(S(3.6)/Sr) < 0.5: this is to avoid distortion of the solution by contributions from AGN dust tori in the 3.6 and 4.5 µm bands. Longer wavelength Spitzer bands are not used in the optical template fitting. • Before commencing the solution we create a colour table for all the galaxy and QSO templates, for each of the photometric bands required, for 0.002 bins in log10(1+z), from 0-0.85 (z=0-6). This involves a full integration over the profile of each filter, and calculation of the effects of intergalactic absorption, for each template and redshift bin, as described in Babbedge et al (2004). Prior creation of a colour table saves computational time. • After pass one we fit the Spitzer bands for which there is an excess relative to the starlight or QSO solution with one of four infrared templates: cirrus, M82 starburst, Arp 220 starburst and AGN dust torus, provided there is an excess at either 8 or 24 µm (cf Rowan-Robinson et al 2004). 3.6 and 4.5 µm data are not used in the second pass redshift solution if the emission at 8 µm is dominated by an AGN dust torus. • In the second pass we use a redshift resolution of 0.002 in log10(1 + z). Galaxies and quasars are treated separately and for galaxies the limit on log10(S(3.6)/Sr) for use of 3.6 and 4.5 µm data in the solution is raised to 2.5, which comfortably includes all the galaxy templates for z < 6. • In pass 2 we use two elliptical galaxy templates, one corresponding to an old (12 Gyr) stellar population (which is also used in the first pass) and one corresponding to a much more recent (1 Gyr) starburst (Maraston et al 2005). For z > 2.5 we permit only the latter template.
• In pass 2 we interpolate between the 5 spiral and starburst templates to yield a total of 11 such templates. Finer interpolation between the templates did not improve the solution. • Extinction up to A_V = 1.0 is permitted for galaxies of type Sbc and later, i.e. no extinction for Sab galaxies or ellipticals. For quasars the limit on the permitted extinction is A_V = 0.3, unless the condition S(5.8) > 1.2 S(3.6) is satisfied, in which case we allow extinction up to A_V = 1.0. This condition is a good selector of objects with strong AGN dust tori (Lacy et al 2004). If we allow A_V up to 1.0 for all QSO fits then we find serious aliasing with low redshift galaxies. The assumptions made here allow the identification of a small class of heavily reddened QSOs which is clearly of some interest. A few galaxies and quasars might find a better fit to their SEDs with A_V > 1, but we found that allowing this possibility resulted in a significant increase in aliasing and a degradation of the overall quality of the redshift fits. A separate search will be needed to identify such objects. As in Rowan-Robinson et al (2004) a prior assumption of a power-law distribution function of A_V is introduced by adding 1.5 A_V to χ^2_ν. This avoids an excessive number of large values of A_V being selected. For galaxies we used a standard Galactic extinction. For QSOs we used an LMC-type extinction, following Richards et al (2003). • As in our previous work there is an important prior on the range of absolute B magnitude, essentially corresponding, for galaxies, to a range in stellar mass. Because there is strong evolution in the mass-to-light ratio between z = 0 and z = 2.5, the M_B limit needs to evolve with redshift. We have assumed an upper luminosity limit for galaxies of M_B = -22.5-z for z < 2.5, = -25.0 for z > 2.5, and a lower limit of M_B = -17.0-z for z < 2.5, = -19.5 for z > 2.5. For quasars we assume the upper and lower limits on luminosity correspond to M_B = -26.7 and -18.7 (see section 4 and Figs 2 for a justification of these assumptions). We do not make any prior assumption about the shape of the luminosity function. Strictly speaking it would be better to limit the near infrared luminosity to constrain the range of stellar mass, but we found that by specifying a limited range of M_B we were also eliminating unreasonably large star-formation rates in late type galaxies. • We have accepted the argument of Ilbert et al (2006) that it is necessary to apply a multiplicative in-band correction factor to each band to take account of incorrect photometry and calibration factors. Table 1 gives these factors for each of our areas, determined from samples with spectroscopic redshifts. • In calculating the reduced χ^2_ν for the redshift solution we use the quoted photometric uncertainties, but we set a floor to the error in each band, typically 0.03 magnitudes for g,r,i, 0.05 magnitudes for u,z, 1.5 µJy for 3.6 µm, and 2.0 µJy for 4.5 µm. This also implies a maximum weight given to any band in the solution. Without this assumption, there would be cases where a band attracted unreasonably high weight because of a spuriously low estimate of the photometric error. We have not used some of the other priors used by Ilbert et al (2006), for example use of prior information about the redshift distribution, nor their very detailed interpolation between templates. A schematic code sketch of how the extinction and absolute-magnitude priors enter the χ^2_ν minimisation is given below.
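The following is a schematic sketch, not the IMPZ code itself, of how a single template could be fitted on the log10(1+z) grid with the two priors just described: the 1.5 A_V penalty added to the reduced χ^2_ν and the evolving absolute-magnitude window. The array names (template_fluxes, av_grid, m_b_grid) are illustrative placeholders, the pre-computed colour table described above would supply template_fluxes, and the normalisation of the reduced χ^2 is shown only schematically.

```python
# Schematic sketch of the chi^2 minimisation with the A_V and M_B priors
# described above; array names and the chi^2 normalisation are illustrative.
import numpy as np

def fit_one_template(flux, flux_err, template_fluxes, av_grid, m_b_grid,
                     log1z_grid, is_galaxy=True):
    """Return (best_z, best_av, best_chi2nu) for one source and one template.

    flux, flux_err  : observed fluxes and errors in the photometric bands used
    template_fluxes : model fluxes from the pre-computed colour table,
                      shape (n_z, n_av, n_band), for a unit-amplitude template
    m_b_grid        : absolute B magnitude of the unit-amplitude template at each z
    """
    n_band = flux.size
    best = (None, None, np.inf)
    w = 1.0 / flux_err ** 2
    for iz, log1z in enumerate(log1z_grid):          # e.g. np.arange(0.0, 0.85, 0.002)
        z = 10.0 ** log1z - 1.0
        for iav, av in enumerate(av_grid):
            model = template_fluxes[iz, iav]
            # Analytic least-squares amplitude for this template, z and A_V.
            scale = np.sum(w * flux * model) / np.sum(w * model ** 2)
            if scale <= 0:
                continue
            chi2nu = np.sum(w * (flux - scale * model) ** 2) / max(n_band - 1, 1)
            chi2nu += 1.5 * av                       # prior on the A_V distribution
            # Evolving absolute-magnitude window for galaxies.
            m_b = m_b_grid[iz] - 2.5 * np.log10(scale)
            bright = -22.5 - z if z < 2.5 else -25.0
            faint = -17.0 - z if z < 2.5 else -19.5
            if is_galaxy and not (bright <= m_b <= faint):
                continue
            if chi2nu < best[2]:
                best = (z, av, chi2nu)
    return best
```

In practice the same loop would be repeated over all galaxy and QSO templates, with the lowest penalised χ^2_ν selecting the adopted type, redshift and extinction, and the full χ^2_ν(z) curve retained so that aliases can be inspected.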
We have found that while carefully chosen priors are essential to eliminate aliases, which are the main cause of outliers, it is also crucial to keep the code as simple as possible, since unnecessary complexity creates new outliers. Aperture matching between bands is a crucial issue for photometric redshifts and we have proceeded as follows. At optical wavelengths we use colours determined in a point-source aperture, but then apply an aperture correction to all bands, defined in the r band by the difference between the Sextractor mag_auto and the point-source magnitude. For the IRAC bands we use 'ap-2' magnitudes, corresponding to a 1.9" radius aperture, except that if the source is clearly extended, defined by Area(3.6)>250, we use Kron magnitudes. For the 2MASS J,H,K magnitudes we use the extended source catalogue magnitude in a 10" aperture, if available, and the point-source catalogue magnitude otherwise. For 80% of sources with 2MASS associations the JHK magnitudes improve the fit slightly, but for 20% the addition of the JHK magnitudes makes the χ^2 for the solution significantly worse. We found the same problem for associations with the UKIDSS DXS survey in EN1, with the corresponding proportions being 70:30. The issue is almost certainly one of aperture matching. We have not included JHK data in the analysis of the performance of the photometric redshift method given here. The requirement for entry to the code is that a source be detected in at least one SWIRE band and in at least one optical band out of gri. We are then able to determine redshifts for ∼ 95% of sources, except in EN1, where the figure drops to 85%. The Galaxy Templates The choice of the number and types of template to use is very important. If there are too few templates, populations of real sources will not be represented, whilst if there are too many there will be too much opportunity for aliases and degeneracy. The IMPZ code uses six galaxy templates (RR03, B04 and Babbedge et al 2006): E, Sab, Sbc, Scd, Sdm and starburst galaxies. These six templates (or similar versions) have been found to provide a good general representation of observed galaxy SEDs. The original empirical templates used in RR03 were adapted from Yoshii and Takahara (1988), apart from the starburst template, which was adapted from observations by Calzetti and Kinney (1992). These templates have been subsequently modified, as described below, to improve the photometric redshift results. In B04 those empirical templates were regenerated to higher resolution using Simple Stellar Populations (SSPs), each weighted by a different SFR and extinguished by a different amount of dust, A_V. This procedure, based on the synthesis code of Poggianti (2001), gave the templates a physical validity. Minimization was based on the Adaptive Simulated Annealing algorithm. Details on this algorithm and on the fitting technique are given in Berta et al (2004). These templates were used by IMPZ in Babbedge et al (2006) in order to obtain photometric redshifts and calculate luminosity functions for IRAC and MIPS sources in the ELAIS N1 field of the SWIRE survey. We have now improved these templates further in a two stage process using the rich multiwavelength photometry and spectroscopic redshifts in the CFH12K-VIRMOS survey (LeFevre et al 2004): a deep BVRI imaging survey conducted with the CFH-12K camera in four fields.
Additionally, there are U band data from the 2.2 m telescope in La Silla, J and K data from the NTT, and IRAC and MIPS photometry from the SWIRE survey. The spectroscopic redshifts come from the VIRMOS-VLT Deep Survey (VVDS: LeFevre 2003), a large spectroscopic follow-up survey which overlaps with the SWIRE data in the XMM-LSS field. The spectroscopic sample is sufficiently large (5976 redshifts) and wide-ranging in redshift (0.01<z<4) to allow a detailed comparison between the template SEDs and observed SEDs. We first find the best-fitting template SED for each VVDS source with the photometric redshift set to the spectroscopic redshift (for sources with i<22.5, nzflag=3 or 4 (Le Févre et al 2005, Gavignaud et al 2006), i.e. reliable galaxy redshifts). We typically find ∼100-400 sources for each of the six galaxy templates (after discarding those which do not obtain a fit with reduced χ²_ν < 5). Comparison of the renormalised, extinction-corrected rest-frame fluxes from each set of sources to their best-fit galaxy template then highlights potential wavelength ranges of the template that fail to reproduce well the observed fluxes (typically from ∼1200Å to >5µm). This comparison is shown in Fig. 1. We then adapt those regions of the templates to follow the average track shown by the observed datapoints. These adapted templates are once again empirical and in order to recover their physical basis we then reproduce them via the same SSP method as used in B04. These final templates are then both physically-based and provide a very good representation of the observed sources from the VVDS survey. For a comparison see Fig 1 and for details of the SSPs and their contribution to each template see Table 2.

Ellipticals

In addition to our standard elliptical template we now include (in the second pass) a young elliptical template. This young elliptical is based on a 1 Gyr old SSP provided by Claudia Maraston (2005). A significant improvement for high redshift galaxy studies is the treatment in this template of the thermally pulsing asymptotic giant branch (TP-AGB) phase of stellar evolution, particularly in the near-IR. Maraston (2006) has demonstrated that this phase makes a strong contribution to the infrared SED of galaxies in the high redshift Universe (z∼2). Because of the limited time for evolution available at higher redshifts, we restrict ellipticals to just this template for z > 2.5. The Maraston elliptical template did not have a 'UV bump' but since we have found that our higher redshift ellipticals did exhibit some UV emission, we have replaced the template shortward of 2100Å with the UV behaviour of our old (12 Gyr) elliptical. We are not able to say whether this UV upturn is due to a recent burst of star formation (but see section 8 for some hint that this may be relevant), to horizontal branch stars, or to binary star evolution. Additionally, as the TP-AGB stellar templates incorporated into the evolutionary population synthesis of Maraston (2006) are empirical, they do not extend longward of 3µm, so we have followed a similar procedure as above to derive the behaviour at λ > 3µm. An important benefit of these stellar synthesis fits to our templates is that we can estimate stellar masses for all SWIRE galaxies (see section 8).

AGN Templates

As well as galaxy templates, the inclusion of a number of different AGN templates has been considered to allow the IMPZ code to identify quasar-type objects as well as normal galaxies.
B04 tested the success of including the SDSS median composite quasar spectrum (VandenBerk 2001) and red AGN templates such as the z = 2.216 FIRST J013435.7-093102 source from Gregg (2002), but found that two simpler AGN templates were most useful. These were based on the mean optical quasar spectrum of Rowan-Robinson (1995), spanning 400Å to 25µm. For wavelengths longer than Lyα the templates are essentially power-laws with α_λ ≈ −1.5, with slight variations included to take account of observed SEDs of ELAIS AGN (Rowan-Robinson et al 2004). Following a number of further tests and template alterations, we now make use of updated versions of these two AGN templates: AGN1 has been improved (now called RR1v2) in a similar procedure to that applied in section 3.1, using the spectroscopic dataset to indicate regions of poor agreement between the template and photometry; AGN2 (now RR2v2) has also been modified to create a third template, RR2v2lines, by adding Lyα and CIV emission lines, the whole template then being adapted via comparison to photometry (as with the other templates). This means IMPZ makes use of three AGN templates, whose main difference is the presence/lack of Lyα and CIV emission, as well as the amount of flux longward of 1µm. The numbers of quasars selecting the 3 templates are 6568 (0.65%) (RR1v2), 5253 (0.52%) (RR2v2) and 5853 (0.58%) (RR2v2lines). Use of a template like that derived by Richards et al (2003) from SDSS quasars, which has very strong emission lines, generated many more aliases.

Table 2. The six galaxy templates and the SSPs that were used to create them. SFR is scaled to give a total mass of 10^11 M⊙.

Justification of priors on MB

Fig. 2 shows the absolute B magnitude versus spectroscopic redshift for galaxies and quasars. The photometric redshift code was run with z forced to be equal to the spectroscopic redshift, but other parameters (template type, extinction) allowed to vary, and MB then calculated for the best solution. The assumed limits on MB are shown and look reasonable. The limit at the low end does exclude some very low luminosity galaxies and for these the photometric redshift would be biassed to slightly higher redshift. However lowering the lower boundary results in low-redshift aliases for galaxies whose true redshift is much higher. The increase of maximum luminosity with redshift reflects the strong evolution in mass-to-light ratio with redshift since z∼2. Van der Wel et al (2005) find d ln(M*/LK) ∼ −1.2z and d ln(M*/LB) ∼ −1.5z for 0 < z < 1.2 for early-type galaxies. We needed to continue this trend to redshift 2.5 to accommodate some of the z ∼ 3 galaxies found by Berta et al (2007), based on the IRAC 'bump' technique of Lonsdale et al (2008, in preparation). At higher redshifts the maximum luminosity should decline, reflecting the accumulation of galaxies, but we have not tried to model that in detail here. The apparent increase in the maximum luminosity with redshift may also be in part due to the increasing volume being sampled as redshift increases.

Impact of number of photometric bands available

To analyze the impact of the number of optical photometric bands available, and of what improvement is afforded by being able to use the Spitzer 3.6 and 4.5 µm data, we have carried out a detailed analysis of the spectroscopic sample from the VVDS survey (LeFevre et al 2004).
Figures 3-7 show results from our code, paired so that the left plot in each case is the result for optical data only, while the right plots show the impact of including 3.6 and 4.5 µm data in the solution. Note that there are some additional objects on the left-hand plots which are undetected at 3.6 or 4.5 µm. Figures 3L, 4L, 5L, 6L, 7L show a comparison of log10(1 + zphot) versus log10(1 + zspec) for VVDS using 10 (ugrizUBVRI), 5 (ugriz), 4 (ugri or griz) and 3 (gri) optical bands without using 3.6 and 4.5 µm data. Figs 3R, 4R, 5R, 6R show corresponding plots when 3.6 and 4.5 µm data are used. The latter show a dramatic reduction in the number of outliers. Inclusion of the 3.6 and 4.5 µm bands has a more significant effect than increasing the number of optical bands from 5 to 10. Absence of the U band does significantly worsen performance, especially at z < 1, but with gri + 3.6, 4.5 there is still an acceptable performance. Fig 7R shows log10(1 + zphot) versus log10(1 + zspec) for the whole SWIRE survey, requiring at least 4 bands in total, S(3.6) > 7.5 µJy and r < 23.5; 4 photometric bands (which could be, say, gri + 3.6 µm) is the minimum number for reliable photometric redshifts. Figs 8 show the corresponding plots in the SWIRE-EN1 and SWIRE-Lockman areas, where we have carried out programs of spectroscopy (Trichas et al 2007, Berta et al 2007, Owen and Morrison 2008), for a minimum of 6 photometric bands in total, and Fig 9 shows the same plot for the whole of SWIRE, requiring a minimum of 6 photometric bands. We see that our method gives an excellent performance for galaxies out to z = 1 (and beyond). Demanding more photometric bands discriminates against high redshift objects, which will start to drop out in short wavelength bands, and against quasars because they tend to have dust tori and therefore the 3.6 and 4.5 µm bands are not used in the solution. In EN1 there appears to be a slight systematic overestimation of the redshift around z ∼1, by about 0.1. This is not seen in the VVDS or Lockman samples, or in the overall plot for the whole SWIRE Catalogue (Fig 9). The most likely explanation is some bias in the WFS photometry at fainter magnitudes.

Dependence of performance on r and χ²_ν

The rms deviation of (zphot − zspec)/(1 + zspec) depends on the number of photometric bands, the limiting optical magnitude, and the limiting value of χ² (see Figs 11 and 12), but a typical value for galaxies is 4%, a significant improvement on our earlier work. For comparison, typical rms values found by Rowan-Robinson (2003) and Rowan-Robinson et al (2004) from UgriZ, JHK data alone were 9.6 and 7%, respectively. Figs 10 show the dependence of log10[(1 + zphot)/(1 + zspec)] on the r-magnitude and on the value of the reduced χ²_ν. The percentage of outliers starts to increase for r > 22. Although we treat χ²_ν > 10 as a failure of the photometric method (due for example to poor photometry or confusion with a nearby optical object), there is still a good correlation of photometric and spectroscopic redshift. Figs 11 show how the rms value of log10[(1 + zphot)/(1 + zspec)], σ, and the percentage of outliers, η, defined as objects with |log10[(1 + zphot)/(1 + zspec)]| > 0.06, vary as a function of the total number of photometric bands. The rms values are comparable to or slightly better than those of Ilbert et al (2006) derived from the VVDS optical data using the LE PHARE code.
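The two statistics used throughout this section follow directly from the definitions just given. The short sketch below (our own illustration; the array names are invented) computes the rms scatter σ of log10[(1 + zphot)/(1 + zspec)] and the outlier fraction η using the 0.06 boundary; whether catastrophic outliers are clipped before taking the rms is a convention the text does not specify, so the sketch simply notes the choice.

import numpy as np

def zphot_stats(z_phot, z_spec, outlier_cut=0.06):
    """rms scatter and catastrophic-outlier fraction in log10(1+z)."""
    z_phot, z_spec = np.asarray(z_phot, float), np.asarray(z_spec, float)
    dlogz = np.log10((1.0 + z_phot) / (1.0 + z_spec))
    outliers = np.abs(dlogz) > outlier_cut   # |Delta log10(1+z)| > 0.06, i.e. about 15% in (1+z)
    eta = outliers.mean()                    # fraction of catastrophic outliers
    sigma = np.sqrt(np.mean(dlogz ** 2))     # rms over all sources; clip `outliers` first if preferred
    return sigma, eta

# e.g. sigma, eta = zphot_stats(cat["zphot"], cat["zspec"])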
For 7 or more bands (e.g. UgriZ + 3.6, 4.5 µm) the rms performance is comparable to the 17-band estimates of COMBO-17 (Wolf et al 2004), significantly better than those of Mobasher et al (2004) with GOODS data, and comparable to Mobasher et al (2007). The outlier performance is significantly better than that of Wolf et al (2004), Ilbert et al (2006) and Mobasher et al (2007), which we attribute to the use of the SWIRE 3.6 and 4.5 µm bands. Fig 12L shows the same quantities as a function of the limit on the reduced χ²_ν, together with a histogram of χ²_ν values, for SWIRE sources with at least 7 photometric bands. The figures for COMBO-17 data were derived from their published catalogue using exactly the same procedure as for the SWIRE data. Those for GOODS were taken from Table 1 of Mobasher et al (2004). Of course, where the number of optical bands is greater than 5, there is generally considerable overlap between the bands (typically U'griZ' plus UBVRI), so the number of bands is not a totally fair measure of the amount of independent data being used (this applies to our analysis of the VVDS data, and to the GOODS and COMBO-17 analyses). While the rms values found in several different studies are comparable, the dramatic improvement here is in the reduction of the number of outliers, when sufficient photometric bands are available. The remaining outliers tend to be due either to bad photometry in one band or to aliases.

Figure 3. LH: Photometric versus spectroscopic redshift for SWIRE-VVDS sources, using 10 optical bands (ugrizUBVRI) and requiring r<23.5. Larger dots are those classified spectroscopically as galaxies, while small dots denote those classified spectroscopically as quasars (but our code selects a galaxy template). RH: Photometric versus spectroscopic redshift for SWIRE-VVDS sources, using 10 optical bands (ugrizUBVRI) + 3.6, 4.5 µm, and requiring r<23.5, S(3.6) > 7.5µJy. The tram-lines in this and subsequent plots correspond to ∆log10(1 + z) = ±0.06, i.e. ±15%, our boundary for catastrophic outliers.

Figure 4. LH: Photometric versus spectroscopic redshift for SWIRE-VVDS sources, using 5 optical bands only (ugriz) and requiring r<23.5. RH: Photometric versus spectroscopic redshift for SWIRE-VVDS sources, using 5 optical bands (ugriz) + 3.6, 4.5 µm, and requiring r<23.5, S(3.6) > 7.5µJy.

Figure 5. LH: Photometric versus spectroscopic redshift for SWIRE-VVDS sources, using 4 optical bands only (ugri) and requiring r<23.5. RH: Photometric versus spectroscopic redshift for SWIRE-VVDS sources, using 4 optical bands (ugri) + 3.6, 4.5 µm, and requiring r<23.5, S(3.6) > 7.5µJy.

Figure 6. LH: Photometric versus spectroscopic redshift for SWIRE-VVDS sources, using 3 optical bands only (gri) and requiring r<23.5. RH: Photometric versus spectroscopic redshift for SWIRE-VVDS sources, using 3 optical bands (gri) + 3.6, 4.5 µm, and requiring r<23.5, S(3.6) > 7.5µJy.

Performance for QSOs

The photometric estimates of redshift for AGN are more uncertain than those for galaxies, due to aliasing problems, but the code is effective at identifying Type 1 AGN from the optical and near-infrared data. For some quasars there is significant torus dust emission in the 3.6 and 4.5 µm bands, and since the strength of this component varies from object to object, inclusion of these bands in photometric redshift determination with a fixed template can make the fit worse rather than better. We have therefore omitted the 3.6 and 4.5 µm bands if S(3.6)/S(r) > 3. Note that only 1.75% of SWIRE sources are identified by the photometric redshift code as Type 1 AGN, and of these only 5% are found to have AV > 0.5. Figure 12R shows photometric versus spectroscopic redshift for SWIRE quasars detected in at least 4 photometric bands. One third of QSOs (53/158) have |log10[(1 + zphot)/(1 + zspec)]| > 0.10 and the rms deviation for the remainder is 9.3%. Many of the outliers are cases where, because an almost power-law SED is being fitted, the redshift uncertainty is very wide indeed. Redshift estimates for AGN can be affected by optical variability since optical photometry for different bands may have been taken at different epochs, as in the INT-WFS photometry in EN1 and EN2 (Afonso-Luis et al 2004). Richards et al (2001) and Ball et al (2007) have given similar estimates of rms and outlier rate for photometric redshifts for quasars. Because we are rarely able to use 3.6 and 4.5 µm data in the photometric redshift solution, we do not expect to have any advantage over fits using purely optical data. But it is worth emphasizing that our code appears to be effective in selecting the Type 1 AGN without any spectroscopic information.

The following entries describe columns of the photometric redshift catalogue (section 5):
alsfr (f8.2): log10 sfr, star formation rate in M⊙ yr^−1
almdust (f8.2): log10 Mdust/M⊙, dust mass in solar units
chi2 (85f6.2): array of reduced χ²_ν as a function of alz2, minimized over all templates, in bins of 0.01 in log10(1 + zph), from 0.01-0.85

REDSHIFT DISTRIBUTIONS

Figure 13L shows the redshift distributions derived in this way for SWIRE-SXDS sources with S(3.6) > 5 µJy, above which flux the optical identifications (to r ∼ 27.5) are relatively complete, with a breakdown into elliptical, spiral + starburst and quasar SEDs based on the photometric redshift fits. Fig 13R shows the corresponding histograms for the whole SWIRE catalogue. Here we have subdivided this into E, Sab, Sbc, Scd, starbursts and quasars. The latter distribution starts to cut off at slightly lower redshift, ∼1.5, because the typical depth of the optical data is r ∼24-25 instead of 27.5. A small secondary redshift peak appears for galaxies at z ∼3. About two-thirds of these have aliases at lower redshifts and some of these z ∼3 sources are spurious (see Fig 7R). However there is a real effect favouring detection of galaxies at z ∼3: the starlight peak at ∼ 1µm entering the IRAC bands generates a negative K-correction at 3.6 and 4.5 µm. Spectroscopy is needed to ascertain the reality of this peak. Ellipticals cut off fairly sharply at z ∼1. At z>2 sources tend to be starbursts or quasars. The structure seen in the redshift distribution may be partly a result of redshift aliasing, but bearing in mind the estimated accuracy of these redshifts (typically 4% in (1+z)), some of the grosser features may indicate large-scale structure within the SWIRE fields. Fig 14L shows the redshift distribution in EN1, in which peaks appear at z ∼ 0.3, 0.5, 0.9, 1.1. From Fig 8L we see that the spectroscopic redshifts show clear clusters at z = 0.31, 0.35, 0.47, 0.9 and 1.1, so the peaks in Fig 14L do seem to correspond to real structure. The peak at z ∼ 0.9 corresponds to the supercluster seen by Swinbank et al (2007). Figure 14R shows the redshift distribution for SWIRE 24 µm sources with S(24) > 200µJy. Here the broad peak at redshift ∼1 is partly due to the shifting of the 11.7 µm PAH feature through the 24 µm band. 10% of sources in the SWIRE photometric redshift catalogue have z >2, and 4% have z>3, so this catalogue is a huge resource for high redshift galaxies.
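Because the catalogue stores the full reduced χ²_ν(z) curve for each source (the chi2 column above: 85 bins of 0.01 in log10(1 + z)), secondary minima of the kind responsible for the aliases mentioned here can be located directly. The fragment below is a minimal, hypothetical illustration of doing so; the grid follows the column description, but the function name and the depth threshold are our own choices.

import numpy as np

# log10(1+z) grid matching the catalogue chi2 array: 85 bins of 0.01 from 0.01 to 0.85
LOGZ_GRID = 0.01 * np.arange(1, 86)
Z_GRID = 10.0 ** LOGZ_GRID - 1.0            # z from about 0.02 to about 6.1

def best_z_and_aliases(chi2_nu, delta=1.0):
    """Best-fit redshift plus any local chi^2 minima within `delta` of the global minimum."""
    chi2_nu = np.asarray(chi2_nu, float)
    i_best = int(np.argmin(chi2_nu))
    # interior local minima of the chi^2(z) curve
    is_min = (chi2_nu[1:-1] < chi2_nu[:-2]) & (chi2_nu[1:-1] <= chi2_nu[2:])
    alias_idx = np.where(is_min)[0] + 1
    aliases = [float(Z_GRID[i]) for i in alias_idx
               if i != i_best and chi2_nu[i] - chi2_nu[i_best] < delta]
    return float(Z_GRID[i_best]), aliases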
EXTINCTION ESTIMATES

We have estimated the extinction for 456470 spiral galaxies (excluding E, Sab, Sbc) and 11538 quasars, those sources which have at least 3 optical bands available. The profile of the mean extinction for all galaxies, and for quasars, with redshift is shown in Fig 15 and follows a similar pattern to that found by Rowan-Robinson (2003). This can be understood in terms of simple star formation histories, where the gas mass in galaxies, and star formation rate, decline sharply from z ∼ 1 to the present. At higher redshift the dust extinction in galaxies is expected to decline with redshift, reflecting the build-up of heavy elements with time. There is some tendency for aliasing between extinction and galaxy template type, and this is probably responsible for the peaks and troughs in the AV distribution. The extinctions measured here represent an excess extinction over the standard templates used, which already have some extinction present. The average extinction in local spiral galaxies is AV ∼ 0.3 (Rowan-Robinson 2003b). We find that 9% of SWIRE galaxies and 6% of quasars have AV ≥ 0.5. A few galaxies and quasars would have found a better fit with AV > 1, but these represent < 1% of the population. The redshift solutions with extinction appear to be better than those with extinction set to zero.

INFRARED GALAXY POPULATIONS

Our infrared template fits, for sources with infrared excess at λ ≥ 8µm, allow us to estimate the bolometric infrared luminosity, Lir, which can be a measure of the total star-formation rate in a galaxy (if there is no contribution from an AGN dust torus). Since the optical bolometric luminosity of a galaxy, corrected for extinction, is a measure of the stellar mass, the ratio Lir/Lopt is a measure of the specific star-formation rate in the galaxy, i.e. the rate of star formation per unit mass in stars. However there are some caveats which should be borne in mind throughout this discussion. Firstly, uncertainties in photometric redshifts, and the possibility of catastrophic outliers, will affect the accuracy of the luminosities. Because these depend on the number of photometric bands available, they are best evaluated by reference to Figs 11. For redshifts determined from at least five bands, the rms uncertainty in log10(1 + zphot) ≤ 0.017 and the corresponding uncertainty in luminosity at z = 0.2, 0.5, 1, 1.5 is 0.20, 0.10, 0.07 and 0.06 dex, respectively, with a few % being catastrophically wrong. Secondly, if we only have data out to 24 µm we need to bear in mind that the estimate of the infrared bolometric luminosity is uncertain by a factor of ∼2 due to uncertainties in the correct template fit (Rowan-Robinson et al 2005, Siebenmorgen and Krugel 2007; Reddy et al (2006) quote 2-3), and this applies also to derived quantities like the star-formation rate. Finally there is the issue of whether the 8-160 µm sources have been associated with the correct optical counterpart in cases of confusion. The process of bandmerging of Spitzer data in the SWIRE survey and of optical association has been described by Surace et al (2004, 2007) and Rowan-Robinson et al (2005). The probability of incorrect association of 3.6-24 µm sources is very low for this survey. Similarly the incidence of multiple possible associations between 3.6 µm and optical sources within our chosen search radius of 1.5 arcsec is very low (< 1%).
Association of 70 and 160 µm sources with SWIRE 3.6-24 µm sources is made only if there is a 24 µm detection, and with the brightest 24 µm source if there are multiple associations. A more sophisticated analysis would involve distributing the far infrared flux between the different candidates. The impact of confusion on the subsequent discussion is expected to be small, though it may affect individual objects. It does not affect any of the more unusual classes of galaxy discussed below. Fig 16L shows the ratio Lir/Lopt versus Lir for our most reliable sub-sample, galaxies with spectroscopic redshift and 70 µm detections, with different colours coding sources dominated by cirrus, M82 or A220 starbursts or AGN dust tori. Unfortunately this sample is heavily biassed to low redshifts and does not contain many examples of ultraluminous galaxies (Lir > 10^12 L⊙), or of galaxies with Lir/Lopt > 1. In Fig 16R we show the same plot but with the requirement of spectroscopic redshift relaxed to that of highly reliable photometric redshifts (at least 6 photometric bands, reduced χ² < 5). We now see large numbers of ultraluminous galaxies, especially with A220 templates. We also see a small number of cool luminous galaxies, sources with Lir > 10^12 L⊙ and Lir/Lopt > 1 whose far infrared spectra are fitted with a cirrus template, as was seen in the ISO ELAIS survey. We now look more closely at each infrared template type in turn. To make things more precise we have estimated the stellar mass for each galaxy, based on our stellar synthesis templates (section 3, Table 2). For each galaxy we estimate the rest-frame 3.6 µm luminosity, νLν(3.6), in units of L⊙, and from our stellar synthesis models estimate the ratio (M*/M⊙)/(νLν(3.6)/L⊙), which we find to be 38.4, 40.8, 27.6, 35.3, 18.7, 26.7, for types E, Sab, Sbc, Scd, Sdm, sb, respectively. [Note: we are measuring the 3.6 µm monochromatic luminosity in total solar units, not in units of the sun's monochromatic 3.6 µm luminosity.] We find values of M* agreeing with these within 10-20% if we base estimates on the B-band luminosity. Estimates based on 3.6 µm should be more reliable, since there is a better sampling of lower mass stars and less susceptibility to recently formed massive stars. These mass estimates would be strictly valid only for low redshift. For higher redshifts the mass-to-light estimates will be lower since, for the oldest stellar populations, M/L varies strongly with age (Bruzual and Charlot 1993, see their Fig 3). This can be approximately modeled using the Berta et al (2004) synthesis fits described in section 3.1 above, with an accuracy of 10%, as (M*/M⊙)/(νLν(3.6)/L⊙)(t) = 50/[a + 1.17(t/t0)^−0.6], where t0 is the present epoch and a = 0.15, 0.08, 0.61, 0.26, 1.44, 0.70 for SED types E, Sab, Sbc, Scd, Sdm and sb, respectively. The dust masses in the photometric redshift catalogue (section 5) have been corrected for this evolution. We have also estimated the star formation rate, using the conversion from 60 µm luminosity of Rowan-Robinson et al (1997), Rowan-Robinson (2001): φ*(M⊙ yr^−1) = 2.2 ε^−1 10^−10 (L60/L⊙), where ε is the fraction of UV light absorbed by dust, taken as 2/3 [but note the discussion of far infrared emission arising from illumination by older stars by Rowan-Robinson (2003b) and Bell (2003), which can result in overestimation of the star-formation rate]. The bolometric corrections at 60 µm, needed to convert Lir to L60, are 3.48, 1.67 and 1.43 for cirrus, M82 and A220 templates, respectively.
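These recipes are simple enough to apply directly to catalogue quantities. The sketch below encodes the type-dependent M*/νLν(3.6) ratios, the age-dependent correction, and the 60 µm star-formation-rate conversion exactly as quoted above; it is our own illustration, and the function names and the way the age ratio t/t0 is supplied are hypothetical.

# Stellar mass and star-formation rate from the recipes quoted in the text.
# (M*/Msun)/(nu L_nu(3.6)/Lsun): coefficient a in 50/[a + 1.17 (t/t0)^-0.6]
A_COEFF = {"E": 0.15, "Sab": 0.08, "Sbc": 0.61, "Scd": 0.26, "Sdm": 1.44, "sb": 0.70}
# 60 um bolometric corrections L_ir/L_60 for the three infrared templates
BOL_CORR_60 = {"cirrus": 3.48, "M82": 1.67, "A220": 1.43}

def stellar_mass(nu_l_nu_36, sed_type, t_over_t0=1.0):
    """Stellar mass in Msun from the rest-frame 3.6 um luminosity (in Lsun).
    With t_over_t0 = 1 this reproduces the low-redshift ratios (38.4, 40.8, ...) to ~10%."""
    ratio = 50.0 / (A_COEFF[sed_type] + 1.17 * t_over_t0 ** -0.6)
    return ratio * nu_l_nu_36

def star_formation_rate(l_ir, ir_template, epsilon=2.0 / 3.0):
    """SFR in Msun/yr via phi* = 2.2 eps^-1 1e-10 (L60/Lsun), with L60 = Lir / bol. correction."""
    l_60 = l_ir / BOL_CORR_60[ir_template]
    return 2.2 / epsilon * 1e-10 * l_60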
Figure 17L shows Lir/Lopt versus M* for cirrus galaxies, colour-coded by optical SED type. Since the emission is due to interstellar dust heated by the general stellar radiation field, Lir/Lopt is a measure of the dust opacity of the interstellar medium. Although many galaxies with elliptical SEDs have Lir/Lopt << 1, consistent with low dust opacity, significant numbers of galaxies with elliptical SEDs in the optical seem to have values of Lir/Lopt comparable with spirals. There is a population of high-mass spirals with dust opacity ≥1. Figures 18 show the specific star-formation rate, φ*/M*, versus Lir for M82 (LH) and A220 (RH) starbursts. The specific star formation rate ranges from 0.003 to 2 Gyr^−1 for M82 starbursts and from 0.03 to 10 Gyr^−1 for A220 starbursts. It is by no means the case that ultraluminous starbursts are predominantly of the A220 type, as is often assumed in the literature. There is an interesting population of galaxies with elliptical SEDs in the optical, with associated starbursts in the infrared luminosity range 10^9−10^11 L⊙, similar to those found by Davoodi et al (2006).

Figure 16. LH: Ratio of infrared to optical bolometric luminosity, log10(Lir/Lopt), versus 1-1000 µm infrared luminosity, Lir, for SWIRE galaxies and quasars with spectroscopic redshifts and 70 µm detections. RH: Ratio of infrared to optical bolometric luminosity, log10(Lir/Lopt), versus 1-1000 µm infrared luminosity, Lir, for SWIRE galaxies and quasars with photometric redshifts determined from at least 6 photometric bands and with reduced χ² < 5 and with 24 and 70 µm detections.

Figure 17. LH: Ratio of infrared to optical bolometric luminosity, log10(Lir/Lopt), versus stellar mass, M*, for SWIRE galaxies with photometric redshifts determined from at least 6 photometric bands and with reduced χ² < 5 and with 24 µm detections, with infrared excess fitted by cirrus template. RH: Ratio of dust torus luminosity to optical bolometric luminosity, log10(Ltor/Lopt), versus dust torus luminosity, Ltor, for SWIRE galaxies with photometric redshifts determined from at least 6 photometric bands and with reduced χ² < 5 and with 24 µm detections, with infrared excess fitted by AGN dust torus template.

Figure 17R shows log10(Ltor/Lopt) versus Ltor for objects dominated by AGN dust tori at 8 µm, with different coloured symbols for different optical SED types. Ltor/Lopt can be interpreted as ftor kopt, where ftor is the covering factor of the torus and kopt is the bolometric correction that needs to be applied to the optical luminosity to account for emission shortward of the Lyman limit. Fig 19 shows the distribution of log10(Ltor/Lopt) for galaxies and quasars with log10 Ltor > 11.5, z < 2, which is an approximately volume-limited sample. The distribution for QSOs is well fitted by a Gaussian with mean -0.10 and standard deviation 0.26. Assuming kopt ∼ 2, as implied by the composite QSO templates of Telfer et al (2002) and Trammell et al (2007), we deduce that the mean value of the dust covering factor ftor for quasars is 0.40. The 2-σ range would be 0.1-1.0. We can use this mean value to infer the approximate luminosity of the underlying QSO in the galaxies with AGN dust tori, since LQSO ∼ 2.5 Ltor, and hence deduce that most of the galaxies with Ltor/Lopt > 2.5 should be Type 2 objects, since the implied QSO luminosity, if it was being viewed face-on, would be sufficient to outshine the host galaxy. There are 413 such galaxies in this distribution.
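The covering-factor estimate follows from just two quoted numbers and is easy to verify; the lines below redo that arithmetic (the variable names are ours, and kopt ∼ 2 is the assumption stated above).

# mean and sigma of log10(Ltor/Lopt) for QSOs, and the assumed optical bolometric correction
mean_log, sigma_log, k_opt = -0.10, 0.26, 2.0

f_tor = 10.0 ** mean_log / k_opt                              # ~ 0.40, mean covering factor
f_lo = 10.0 ** (mean_log - 2 * sigma_log) / k_opt             # ~ 0.12
f_hi = min(10.0 ** (mean_log + 2 * sigma_log) / k_opt, 1.0)   # capped at 1.0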
QSOs with Ltor/Lopt > 2 should also be Type 2 objects, although they must be presumed to be cases similar to SWIRE J104409.95+585224.8 (Polletta et al 2006), where the optical light is scattered light representing only a fraction of the total intrinsic optical output. There are 84 of these. These 497 Type 2 objects can be compared with the 796 QSOs with Ltor/Lopt < 2, which are Type 1 objects, yielding a Type 2 fraction of 0.41, in good agreement with the covering factor deduced above. Most of the galaxies with log10 Ltor < 11.5 in Fig 17R tend to lie at progressively lower values of Ltor/Lopt than the QSOs, by an amount that increases towards lower Ltor, consistent with an increasing contribution of starlight to Lopt.

Figure 18. LH: Specific star-formation rate, φ*/M*, in Gyr^−1, versus stellar mass, M*, for SWIRE galaxies with photometric redshifts determined from at least 6 photometric bands and with reduced χ² < 5 and with 24 µm detections, with infrared excess fitted by M82 template. RH: Specific star-formation rate, φ*/M*, in Gyr^−1, versus stellar mass, M*, for SWIRE galaxies with photometric redshifts determined from at least 6 photometric bands and with reduced χ² < 5 and with 24 µm detections, with infrared excess fitted by A220 template.

Figure 20 shows spectral energy distributions for examples of two populations of interest. Fig 20L shows the SEDs of 5 galaxies with good quality photometric redshifts (at least 6 photometric bands, reduced χ² < 5) and 24 and 70 µm detections (in 2 cases also 160 µm), with the infrared template fitting requiring a very luminous cool component (> 10^12 L⊙). The luminosity in this cool component is clearly substantially greater than the optical bolometric luminosity, suggesting that the optical depth of the interstellar medium in these galaxies is > 1 (Efstathiou and Rowan-Robinson 2003, Rowan-Robinson et al 2004). Figure 20R shows the SEDs of 4 galaxies with elliptical galaxy template fits in the optical, spectroscopic redshifts and Lir > Lopt. These are galaxies that in optical surveys would be classified as red, early-type galaxies. In the infrared two are fitted by cirrus components (objects 1 and 3), one by an Arp220 starburst (object 4) and one by a combination of an AGN dust torus and an M82 starburst (object 2). Since highly extinguished starbursts like Arp220, which are probably the product of a major merger, do look like ellipticals in the optical because the optical light from the young stars is almost completely extinguished, object 4 is consistent with being a highly obscured starburst. For object 2, an elliptical galaxy SED in the optical coupled with an AGN dust torus in the mid infrared implies the presence of a Type 2 QSO, in which the QSO is hidden behind the dust torus. The infrared SED of this object (object 2) also shows evidence for strong star formation, so this also appears to be a case of an obscured starburst. The two galaxies with elliptical-like SEDs in the optical and with strong cirrus components (objects 1 and 3) are harder to understand. If the emission is from interstellar dust and the dust were being illuminated solely by the old stellar population, then a high optical depth would be implied by the fact that Lir > Lopt, but the optical SED shows no evidence of extinction. The implication is that they are probably also obscured starbursts, perhaps with star formation extended through the galaxy to account for the form of the infrared SED.
To see whether the morphology can help the interpretation, we show in Fig 21 IRAC images of these 4 galaxies, colour-coded as blue for 3.6 µm and red for 8 µm. Objects 1 and 2 look elliptical and object 3 looks lenticular. All three are strongly reddened in their outer parts, indicating that the long wavelength radiation is coming from the outer parts of the galaxies. This suggests that the infrared emission is associated with infalling gas and dust. Object 4 appears more compact, consistent with being similar to Arp 220.

Figure 19. Histogram of log10(Ltor/Lopt) for QSOs (blue, solid) and galaxies (red, broken).

Figure 22R shows the specific star formation rate versus stellar mass for 4135 SWIRE galaxies with 1.5 < zphot < 2.5, 24 µm detections and infrared excesses in at least two bands, and reduced χ²_ν < 5. This can be contrasted with Fig 15 of Reddy et al (2006), based on 200 galaxies identified by Lyman drop-out and near-infrared excess techniques. SWIRE is clearly a source of high redshift galaxies with infrared data which far exceeds previous optical-based samples. Compared with Reddy et al (2007), we note that in the range log10(M*/M⊙) ∼ 11−12 we find a far larger range of specific star formation rate. Because the depth of our optical surveys is typically r ∼ 23.5-25, we do not sample the low-mass end of the galaxy distribution at these redshifts as well as Reddy et al (2006). Figure 22L shows the same plot for z < 0.5. We see that at the present epoch specific star formation rates > 1 Gyr^−1 are rare, whereas they seem to have been common, at least amongst massive galaxies, at z ∼ 2. At low redshifts we see some evidence for 'downsizing', in that elliptical galaxies, which tend to be of higher mass, have significantly lower specific star-formation rates than lower mass galaxies, which tend to have late-type spiral or starburst spectral energy distributions. Figure 23L shows the star formation rate φ* against redshift. Although the distribution is subject to strong selection effects, with the minimum detectable φ* increasing steeply with redshift to z ∼ 2 and then decreasing towards higher redshift because of the negative K-correction at 8 µm, the plot is consistent with a star-formation history in which the star formation rate increases from z = 0-1.5 and then remains steady from z = 1.5-3. We can use our radiative transfer models for cirrus, M82 and A220 starburst components to estimate the approximate dust mass for each galaxy, using the recipe (Andreas Efstathiou, 2007, private communication): Mdust/M⊙ = k Lir/L⊙, where k = 1.3×10^−3 for cirrus, 1.1×10^−4 for M82 and 4.4×10^−4 for A220. These dust masses have been given in the catalogue (section 5) and Fig 23R shows the dust mass, Mdust, versus the stellar mass, M*, for SWIRE galaxies, coded by optical SED type. For most galaxies Mdust/M* ranges from 10^−6 to 10^−2, with the expected progression through this ratio with Hubble type. Galaxies with exceptionally high values of this ratio presumably have gas masses comparable to the stellar mass, assuming the usual gas-to-dust ratios. The dust masses will be uncertain because the uncertainty in photometric redshifts makes luminosities uncertain by a factor ranging from 0.06 dex at z = 1 to 0.20 dex at z = 0.2 (see start of this section). There will also be a larger uncertainty associated with ambiguity in the template fitting.
To estimate this we looked at all fits to the infrared excess where there is an excess in at least two bands, excluding cases with an AGN dust torus component, and with reduced χ² for the infrared fit < 5. Relative to the best-fit dust mass we find an rms uncertainty of ±0.8 dex, or a factor of 6, so these dust mass estimates have to be treated with caution. This uncertainty is substantially reduced if 70 or 160 µm data are available. Data from Herschel will allow these dust mass estimates to be greatly improved.

CONCLUSIONS

We have presented the SWIRE Photometric Redshift Catalogue, 1025119 redshifts of unprecedented reliability and good accuracy. Our methodology is an extension of earlier work by Rowan-Robinson (2003), Rowan-Robinson et al (2004) and Babbedge et al (2004), and is based on fixed galaxy and QSO templates applied to data at 0.36-4.5 µm, and on a set of 4 infrared emission templates fitted to infrared excess data at 3.6-170 µm. The galaxy templates are initially empirical, but have been given greater physical validity by fitting star-formation histories to them. The code involves two passes through the data, to try to optimize recognition of AGN dust tori. A few carefully justified priors are used and are the key to suppression of outliers. Extinction, AV, is allowed as a free parameter. We have provided the full reduced χ²_ν(z) distribution for each source, so that the full error distribution can be used, and aliases investigated. We use a set of 5982 spectroscopic redshifts taken from the VVDS survey (LeFevre et al 2004), the ELAIS survey, the SLOAN survey, NED, our own spectroscopic surveys (Trichas et al 2007) and Swinbank et al (2007), to analyze the performance of our method as a function of the number of photometric bands used in the solution and the reduced χ²_ν. For 7 photometric bands (5 optical + 3.6, 4.5 µm) the rms value of (zphot − zspec)/(1 + zspec) is 3.5%, and the percentage of catastrophic outliers (defined as > 15% error in (1+z)) is ∼ 1%. These rms values are comparable with those achieved by the COMBO-17 collaboration (Wolf et al 2004), and the outlier fraction is significantly better. The inclusion of the 3.6 and 4.5 µm IRAC bands is crucial in the suppression of outliers. We have shown the redshift distributions at 3.6 and 24 µm. In individual fields structure in the redshift distribution corresponds to real large-scale structure which can be seen in the spectroscopic redshift distribution, so these redshifts are a powerful tool for large-scale structure studies. 10% of sources in the SWIRE photometric redshift catalogue have z >2, and 5% have z>3, so this catalogue is a huge resource for high redshift galaxies. The redshift calibration is less reliable at z >2 and high redshift sources often have significant aliases, because the sources are detected in fewer bands. We have shown the distribution of the mean extinction, AV, as a function of redshift. It shows a peak at z∼0.5-1.5 and then a decline to higher redshift, as expected from the star-formation history for galaxies. A key parameter for understanding the evolutionary status of infrared galaxies is Lir/Lopt, which can be analyzed by optical and infrared template type. For cirrus galaxies this ratio is a measure of the mean extinction in the interstellar medium of the galaxy. There appears to be a population of ultraluminous galaxies with cool dust and we have shown SEDs for some of the reliable examples. For starbursts Lir/Lopt can be converted to the specific star-formation rate.
Although the very highest values of this ratio tend to be associated with Arp220 starbursts, by no means all ultraluminous galaxies are. There is a population of galaxies with elliptical SEDs in the optical and with luminous starbursts, and we have shown SEDs for four of these with data in all Spitzer bands and spectroscopic redshifts. For 3 of them the IRAC colour-coded images show that the 8 µm emission is coming from the outer regions of the galaxies, suggesting that the star-formation is associated with infalling gas and dust.

Figure 20. LH: Spectral energy distributions of luminous cool galaxies with good photometric redshifts (> 5 photometric bands, reduced χ² < 5) and 24 and 70 µm detections. In each case the infrared SEDs are fitted by cirrus and M82 starburst components, with the dominant luminosity coming from the cool cirrus component. RH: Spectral energy distributions of luminous infrared galaxies with elliptical galaxy SEDs in the optical, 24, 70 and 160 µm detections, and spectroscopic redshifts.

Figure 22. The specific star-formation rate, φ*/M*, versus stellar mass, M*, for galaxies with (L) z < 0.5, (R) 1.5 < zphot < 2.5, colour-coded by optical SED type.

Figure 23. L: The star-formation rate, φ*, in M⊙ yr^−1, versus redshift, colour-coded by optical SED type. R: The dust mass, Mdust/M⊙, versus stellar mass, M*, for SWIRE galaxies, colour-coded by optical SED type.
2008-04-05T16:33:29.000Z
2008-02-13T00:00:00.000
{ "year": 2008, "sha1": "1c2701f017422c0cd2af6ac99ac6bfbbf06ee750", "oa_license": null, "oa_url": "https://academic.oup.com/mnras/article-pdf/386/2/697/3609092/mnras0386-0697.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "1c1920bd6784dd7e212e406aab95796c056a072b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
230553430
pes2o/s2orc
v3-fos-license
How the COVID-19 pandemic affected hotel Employee stress: Employee perceptions of occupational stressors and their consequences

This study sought to examine the impacts of the global coronavirus pandemic on hotel employees' perceptions of occupational stressors and their consequences. Paired t-tests and structural equation modeling were applied to examine the responses of 758 hotel employees in the United States. The findings showed that occupational stressors after the outbreak of the pandemic consisted of three domains: traditional hotel-work stressors, unstable and more demanding hotel-work-environment stressors, and unethical hotel-labor-practices-borne stressors. The impacts of these stressors differed from the hypothesis, in that traditional hotel-work stressors positively affected job satisfaction and organizational commitment. The findings showed that job satisfaction and organizational commitment significantly explained job performance, subjective well-being, and prosocial behavior, but they did not significantly influence turnover intention. Hotel employees' pre-pandemic perceptions of occupational stressors and their consequences also differed significantly from their perceptions after the pandemic had broken out.

Introduction

The novel coronavirus disease known as COVID-19 has caused severe consequences as a result of its rapid spread worldwide. Indeed, the latest numbers from the World Health Organization (World Health Organization (WHO), 2020) website as of 21 June 2020 reported more than 8.5 million cases worldwide, including approximately 456,973 deaths (https://covid19.who.int/). The number of cases has been expanding globally, with critical situations demanding multiple emergency actions by government entities around the world. Many countries and cities are on complete lockdowns to prevent COVID-19 from spreading. One of the severely impacted industries is the hotel industry. For example, in the United States, the room occupancy rates of hotels plummeted to 39.4% in March 2020 (Statista, 2020). The deterioration of hotels' financial situations has wreaked havoc on employment and job security. Hotels have forced their staff to take early retirement, be laid off, take unpaid leave, undergo a reduction in welfare benefits, and change their working shifts or positions (Edgecliffe-Johnson, 2020, March 18). These oppressive circumstances have fostered anxiety in employees about their work and have made them fearful for their employment future. Occupational stressors were identified in previous studies as one of the key predictors that negatively affect employee satisfaction, commitment, job performance, subjective well-being, prosocial behavior, and intention to stay (Darvishmotevali and Ali, 2020; Hwang et al., 2014; Kang et al., 2020; Kim et al., 2015; Yang and Lau, 2019). Hotel employees are in extreme states of anxiety and feel stressed working at their workplaces during the COVID-19 pandemic. The grave situation of escalating occupational stress due to the detrimental impacts of the pandemic on all hotel employees, from frontline workers to management, motivated us to investigate the effects of the pandemic on occupational stressors and their consequences. Here, we viewed stress, which is an individual's physical or psychological response to unusual situations, as a common and essential part of life (Ivancevich and Matteson, 1980; Selye, 1976).
According to the International Labor Organization (2020), however, employees must confront a huge challenge as they attempt to cope with the newly changing work environment created by the COVID-19 pandemic and its consequent impact on occupational stressors. This study aimed to examine the impacts of COVID-19 on hotel employees' perceptions of occupational stressors and their outcomes. More specifically, it sought to identify the factors affecting employees' occupational stressors after the outbreak of the COVID-19 pandemic. Second, it sought to assess the status quo of job satisfaction, organizational commitment, job performance, subjective well-being, prosocial behavior, and turnover intention. Third, it attempted to identify the structural relations among the concepts. Fourth, it sought to compare the hotel workers' perceptions of occupational stressors and their consequences, as influenced by the employees' sociodemographic and jobrelated variables. Last, it aimed to compare hotel workers' perceptions of the occupational stressors and their consequences before and after the outbreak of the COVID-19 pandemic. Occupational stress and stressors Research on occupational stress has long been a major focus for many hotel practitioners and academic researchers because of its significant impact on organizations (Ariza-Montes et al., 2018;Huang et al., 2018;Radic et al., 2020). For example, if an employee fails to cope with employment demands, conflict occurs between employees or between the employee and his/her job (Faulkner and Patiar, 1997). In addition, that conflict can provoke personal dysfunction that manifests in negative physiological and emotional responses in the workplace (Levi, 1981). Thus, occupational stress can be defined as "a particular individual's awareness or feeling of personal dysfunction as a result of perceived conditions or happenings in the work setting" (Parker and DeCotiis, 1983, p. 161). Because occupational stress is viewed as one of the most important challenges of human resource management, many researchers have sought to identify the impacts of occupational stress in the hospitality industry. Some studies have indicated that occupational stressors enhance hotel employee's turnover intention (Hwang et al., 2014;Tongchaiprasit and Ariyabuddhiphongs, 2016). Other studies have shown that occupational stress reduces employee job satisfaction (Hight and Park, 2019;Yousaf et al., 2019) and job performance (Abdelhamied and Elbaz, 2018;Akgunduz, 2015). Therefore, it is meaningful and important to examine the dimensionality of occupational stressors and their impacts on internal consequences in the hotel industry. Relationship of occupational stressors to job satisfaction Job satisfaction is defined as the "pleasurable emotional state resulting from the appraisal of one's job as achieving or facilitating the achievement of one's job values" (Locke, 1969). Put differently, it is a judgment of the perceived relationship between employees' expectations from their work and the perceived offering they receive (Lund, 2003). Indeed, job satisfaction is a significant internal goal of every organization (Amissah et al., 2016). Studies have found that occupational stress is a key predictor of employees' negative emotional outcomes, such as job dissatisfaction (Barsky et al., 2004;Dartey-Baah et al., 2020). 
In the literature on the hospitality industry, Kim et al.'s (2015) study indicated that occupational stressors, including role conflict and role ambiguity, were negatively associated with job satisfaction. In a study by Yousaf et al. (2019) that examined the impact of occupational stress and the effects of work-social support on the outcome of that stress, occupational stress was found to be the most influential factor mitigating employee satisfaction. That conclusion has been found consistently in other hospitality and tourism studies (Chan et al., 2015;Cheng and Yi, 2018). Therefore, we proposed the following hypothesis. Hypothesis 1. Employees' occupational stressors negatively affect their job satisfaction. Relationship of occupational stressors to organizational commitment Organizational commitment comprises a large area of organizational perceptions, incorporating not only job-level perceptions but also explicitly including the organizational characteristics to which individuals attribute their emotional attachment, involvement, and continuance in the organization. Hotel employees' cohesive contacts with customers make them particularly prone to experiencing occupational stress (Wetzels et al., 1999). In accordance with social exchange theory, hotel employees who labor in an unpleasant work environment that is characterized by high occupational stress have a reduced likelihood of becoming involved with and emotionally attached to the hotel of their current work (Tiyce et al., 2013). Two recent hospitality-industry studies (Garg and Dhar, 2014;Yang and Lau, 2019) have confirmed this argument, with the frontline hotel workers claiming emotional and physical stress and burnout because of customer incivility. Such stress can lead to apathy at work and unwillingness to be part of a team or a hotel (Lee and Mathur, 1997). On the basis of all of these findings, we established the following hypothesis. Hypothesis 2. Employees' occupational stressors negatively affect their organizational commitment. Relationship of job satisfaction and occupational commitment to turnover intention, subjective well-being, and prosocial behavior Job performance is defined as employees' performed activities and behaviors that contribute to an organization's goals, including the delivery of tangible services (e.g., hotel check-in and check-out) and intangible services (e.g., guest relations) (Ieong and Lam, 2016). In addition to employee job performance, subjective well-being has also received attention in the extant hospitality literature, through efforts to reveal the cognitive and emotional evaluations of hotel employees' lives (Wang et al., 2020). Life satisfaction is a crucial issue in employees' subjective well-being because of its close relationship with life success (Diener et al., 2002). Prosocial service behavior refers to employee behaviors that are helpful to other individuals, groups, or organizations. Prosocial behavior in this study refers to individual social-altruism and voluntary behaviors that are intended to benefit another in society (Eisenberg et al., 2015). Turnover intention can be defined as employees' expression of their intention to quit an organization and to seek another job (Tett and Meyer, 1993). High turnover rate of hotel employees has become a main feature of the hotel industry. Previous studies have indicated that occupational stress leads to negative job satisfaction (Hight and Park, 2019;Yousaf et al., 2019). 
Moreover, stressed employees exhibit a weak commitment to the workplace (Garg and Dhar, 2014). In a psychological study, Yousef (2000) proposed that employees who are highly committed to their organizations and satisfied with their jobs will exhibit high job performance. This relationship has been tested and validated in recent hospitality and tourism studies (Aydın and Kalemci Tüzün, 2019;Koo et al., 2019). Based on the strong connection between job satisfaction and life satisfaction, some studies (Lee et al., 2016;Yurcu and Akinci, 2017) sought to identify and support the positive association between job satisfaction and subjective well-being in the hospitality industry. In addition, Polo-Vargas et al. (2017) identified an indirect link between organizational commitment and life-satisfaction through employee engagement. High turnover rate is an emergent challenge for hotel businesses. Previous studies have identified that high levels of perceived occupational stress are associated with high levels of turnover intention (Koo et al., 2019;Wen et al., 2020). Moreover, negative associations have been identified between job satisfaction and turnover intention and between job commitment and turnover intention (Hsiao et al., 2020;Kim et al., 2017). More recently, hospitality and tourism scholars have extended their research focus from organizational outcomes to societal outcomes, such as prosocial behavior. Studies have suggested that employees who are relatively more satisfied with their workplace and more committed to it tend to join voluntary activities more frequently (Isen and Baron, 1991) and engage more often than average in social networking (Brissette et al., 2002), although those studies did not explicitly test the relationships between job satisfaction, job commitment, and prosocial behavior. Thus, the following hypotheses are proposed. Comparison of occupational stressors and other consequences according to hotel employees' sociodemographic and job-related variables Previous studies have suggested that hotel employees' occupational stressors can be influenced by various sociodemographic and job-related variables, such as gender, position level, age, department, and hotel type (Herrero et al., 2012;Wireko-Gyebi and Ametepeh, 2016). For example, Herrero et al. (2012) suggested that women initially have higher stress levels than men do. Some studies have found that managerial hotel employees tend to experience greater stress because their job duties include handling complaints from demanding customers (Karakaş and Tezcan, 2019;Lee and Shin, 2005). To accomplish sustainability within hotel human resource management, age is the most dominant variable for young employees, who are more willing to change jobs (Vetráková et al., 2019). In Aydin's (2018) study, hotel employees in different departments showed various levels of occupational stress because their job duties differed, even though they worked in the same hotel. Karatepe and Uludag (2008) compared the roles of job stress, burnout, and job performance among hotel employees between independently owned/family-owned hotels and chain hotels. Their results indicated that employees who were working in independently owned/family-owned hotels demonstrated a higher degree of emotional exhaustion and depersonalization than employees of chain hotels did. Thus, the above-discussed studies prompted the following hypothesis. Hypothesis 5. 
The magnitude of occupational stressors and employee-associated outcomes will differ in accord with hotel employees' sociodemographic and job-related variables.

Comparison of occupational stressors and their consequences before and after the onset of the COVID-19 pandemic

The hotel and tourism business is one of the largest and most rapidly growing industries, but it is extremely vulnerable. The negative impacts of health-related risks can be devastating and enduring (Rosselló et al., 2017). The major impact of health-related risks on tourism is a decrease in inbound tourist demand, and the extent of that impact depends on the degree of dependence on the area affected by the health-related disease pandemic (Yang and Chen, 2009). Although the actual economic losses of health-related diseases in the tourism sectors depend on their relative contributions to the national economy, travel and trade restriction measures can create significant economic losses for an affected area (Huang, 2009; Smith, 2006; Otoo and Kim, 2018). A health-related disease generates political conflict, such as discrimination against races and nationalities, entry bans, and strict quarantine measures (Curley and Thomas, 2004). Although previous studies have provided significant contributions to our comprehension of the macro-level outcomes caused by health-related risks, only a few studies have attempted to examine the micro-level employee-associated outcomes caused by health-related disease. Hotel operations may require their employees to take unpaid leave, reduce their working hours, change their employment status, reduce their salary, and forego their overtime compensation (Chaturvedi, 2020, April 09). Hotel employees become extremely anxious when they lose faith in the future of the hotel industry. In addition, endless cost-saving measures can destroy the satisfaction, commitment, and loyalty of employees (Wang et al., 2018; Wong and Li, 2015). Therefore, it is assumed that employee perceptions of occupational stressors will be different before and after the COVID-19 pandemic outbreak, and we proposed the following hypothesis.

Hypothesis 6. The magnitude of occupational stressors and employee-associated outcomes will be different before and after the COVID-19 pandemic.

Methods

The measurement items for the final survey were developed through a thorough literature review, in-depth interviews, and pilot surveys. The twenty-three items used to measure the attributes of occupational stressors were adopted from previous studies (Hwang et al., 2013, 2014; Tongchaiprasit and Ariyabuddhiphongs, 2016). To ensure the content validity of the items that we derived from the literature review and to identify new items that we might have missed, we conducted in-depth interviews with five hotel managers and 10 hotel employees. Eight other items were added to the scale on the basis of the situation of the COVID-19 pandemic, for example, "forceful advanced annual leave," "demand of replacing the job duties for other departments (e.g., buffet restaurant, guest relation)," and "frequent reporting/documentation about the hygiene issues." In addition, a pilot test was conducted with 50 hotel employees through an online panel survey to purify the measurement items. A total of 31 items were used to measure the construct of occupational stressors.
Four items to indicate organizational commitment were also drawn from a previous study (Kucukusta et al., 2016), whereas three items that manifest turnover intention were extracted from a study conducted by Netemeyer et al. (1996). Four items related to job performance were extracted from previous literature (Griffin et al., 2007). Five items that addressed subjective well-being were extracted from previous literature (Diener and Fujita, 1995;Zhao et al., 2016). Finally, items indicating prosocial behavior (three items) were selected from previous research (Gagné, 2003;Twenge et al., 2007). All of the items were measured using a seven-point Likert scale ranging from strongly disagree (1), neutral (4) to strongly agree (7). The sample for this study was composed of hotel employees in the United States. A self-administered online panel survey was conducted through online panel companies to select targeted nationwide samples and to consider cost and time effectiveness (Granello and Wheaton, 2004). The main survey was executed from 28 April to 21 May 2020 and comprised three screening questions that requested information on current employment status, working experience in hotels, and awareness of the pandemic outbreak. Respondents were asked to evaluate their perceived occupational stressors and consequences on the basis of preand post-COVID-19 pandemic. Ultimately, those procedures resulted in a collection of 800 questionnaires. Responses from employees who had been working for a hotel for less than one year were eliminated from the list of respondents. To trace insincere answers, profiles for the number of work years, age, work position, and work department were compared for every respondent. As a result, 42 questionnaires were removed because they were believed to contain untrustworthy responses, including having only one number checked throughout the entire questionnaire, the survey having been completed within two minutes, and report of a high employment position despite the respondent's young age. Consequently, a total of 758 respondents were accepted for further data analyses. Profiles of the respondents According to the results of the frequency analysis, 63.7% of the respondents were males. Categories of age groups, in group-size order, were 30 s (43.7%), 20 s (28.1%), 40 s (20.4%), and 50 s (7.8%). In terms of educational level, approximately 60.6% of the participants had a university degree. A majority of respondents were working at a supervisory level (39.3%), while 32.8% were at a managerial level. Slightly more than half (55.1%) of the participants worked for independent, privately owned hotels, while 44.3% of the respondents worked for chain-brand hotels. About 71% of them were working in front-of-house departments, whereas 28.1% of them were working in back-of-house departments. In regard to duration of work in the hotel industry, the largest group was that of individuals who had worked in hotels for four to nine years (51.1%), followed by the group who had worked for one year to three years (25.3%), and finally the group who had worked for 10 years or longer (23.6%). The locations of the respondents' work residence were Texas (12.0%), New York (11.5%), California (11.3%), Florida (6.2%), and Pennsylvania (4.4%). The respondents reported that their work hotels' room occupancy rate after the COVID-19 outbreak was 40.4%, compared with a room occupancy rate before COVID-19 of 71.3%. Further detailed profiles are provided in Table 1. 
Exploratory factor analysis of the measurement model (first half of the data set, n = 379) The data collected were randomly split into two data sets for cross-validation (Kline, 2016). An exploratory factor analysis (EFA) with principal-axis factoring and promax rotation was conducted for the first half of the data set (n = 379). As Table 2 shows, items with communalities below 0.4 and factor loadings of less than 0.4 were considered for removal (Stevens, 1992). Factors were retained if their eigenvalues were greater than 1.0. The reliability alphas for all of the domains ranged from 0.86 to 0.94. Finally, the 24 retained items showed a three-factor solution. The three extracted domains of occupational stressors were labeled "traditional hotel-work stressors," "unstable and more demanding hotel-work-environment stressors," and "unethical hotel-labor-practices-borne stressors." The other constructs generated single-factor solutions. Confirmatory factor analysis of the measurement model (second half of the data set, n = 379) A confirmatory factor analysis (CFA) was applied to the second half of the data set (n = 379) to confirm the factor structure that had been identified from the EFA. The results of the CFA indicated a satisfactory level of fit for the overall fit indices (χ²(1000) = 1723.63, p < 0.001; CFI = 0.95, TLI = 0.94, RMSEA = 0.04, GFI = 0.84). The standardized factor loading of each item ranged from 0.64 to 0.82, thus exceeding the threshold value of 0.5. All average variance extracted (AVE) values and construct reliability values were higher than 0.5 and 0.85, respectively, thus supporting convergent validity. In addition, the square roots of the AVE values for each construct were greater than the correlation coefficients for the corresponding inter-constructs, thus demonstrating discriminant validity. Structural equation modeling In Table 3, the results of our structural equation modeling (SEM) demonstrate a satisfactory level of fit for the overall fit indices (χ²(1034) = 3350.36, p < 0.001; CFI = 0.91, TLI = 0.90, RMSEA = 0.05, GFI = 0.85). We examined a total of 14 direct relationships in this study, and the results supported 10 of those 14 hypotheses. Hotel employees' perceptions of occupational stressors and consequences before and after the COVID-19 pandemic outbreak Hypothesis 6 was tested by examining the difference between hotel employees' occupational stressors and their consequences before and after the COVID-19 pandemic outbreak. A significant difference between the before-outbreak and after-outbreak values was observed at the .001 level for the two new occupational stressors and their consequences. Thus, Hypothesis 6 was supported. Table 5 shows that the traditional hotel-work stressors, such as excessive workload, long working hours, work demands on private life, repetitive work, lack of time with family, and poor cooperation with other staff/departments, were statistically higher before the onset of the pandemic than after it had taken hold. In contrast, both the unstable and more demanding hotel-work-environment stressors and the unethical hotel-labor-practices-borne stressors were statistically lower before the onset of COVID-19 than they were after the pandemic had taken root. In addition, hotel employees' attitudes and behaviors were statistically different before the onset of the pandemic than they were after it. 
Table 5 shows that job satisfaction, organizational commitment, job performance, subjective well-being, and prosocial behavior had each significantly decreased after the pandemic took hold, whereas turnover intention was significantly higher after COVID-19 had become quite prevalent. The detailed information is visually showcased in Fig. 1. Discussion The results of this study indicate that hotel employees who had high perceived levels of traditional hotel work stressors still experienced positive job satisfaction and organizational commitment. This result differs from our expectation, which was based on a number of previous studies that had shown that employees' occupational stress was likely to reduce their job satisfaction and organizational commitment (Chan et al., 2015;Tiyce et al., 2013;Yousaf et al., 2019). However, those earlier studies did not consider an unpredicted economic recession, which likely affected our results. As a result of the coronavirus pandemic, the underemployment rate has surged and hotel employees' incomes have been substantially curtailed by a reduction in staff welfare. It may be that in our study, the hotel employees were willing to ignore the traditional hotel-work stressors during a global economic crisis because those stressors were compensated for by the employees' ability to still earn income for their livelihood in the midst of a time of slashed employment. Perhaps even more importantly, it may be that having such stresses signified an effort by the hotel to stand shoulder to shoulder with its employees to ride out the current difficult times, and consequently such employer support generated job satisfaction and organizational commitment. This study also identified two new domains of hotel occupational stressors (unstable and more demanding hotel-work-environment stressors, and unethical hotel-labor-practices-borne stressors) that occurred after the COVID-19 pandemic had created an extreme state of anxiety and had lowered job satisfaction and organizational commitment. These results are confirmed by previous studies that demonstrated the negative effect of occupational stress on employees' attitude (Cheng and Yi, 2018;Kim et al., 2015;Yang and Lau, 2019). Second, the effects that job satisfaction and organizational commitment exert on employee behavior have already been demonstrated (Aydın and Kalemci Tüzün, 2019;Brissette et al., 2002;Yousef, 2000;Yurcu and Akinci, 2017) and shown to reflect the original idea of the social exchange theory, which states that job satisfaction and organizational commitment are positively associated with hotel employees' constructive behaviors (Garba et al., 2018). Nevertheless, the findings of this study are inconsistent with previous studies in which job satisfaction and organizational commitment were negatively associated with turnover intention (Hsiao et al., 2020;Kim et al., 2017;Koo et al., 2019;Wen et al., 2020). Some hotel employees might feel that quitting their job is not an ideal option because during times of imminent economic risk it is extremely difficult to find a new job with the same remuneration package. Therefore, the hotel employees in our study who reported a low level of job satisfaction and organizational commitment did not necessarily have a higher turnover intention. Third, hotel employees' sociodemographic and job-related variables played a significant role in the respondents' perceived occupational stressors and their consequences pre-and post-COVID-19 outbreak. 
In our study, the above-age-40 managerial-level employees showed a higher job satisfaction and organizational commitment than the entry and supervisory employees did, even though they also had a higher level of perceived occupational stress. Two feasible explanations exist. First, older-age managerial employees are more likely to enjoy their job and consider their current employment to be a long-term career through which they can achieve self-accomplishment, such as enhanced opportunities for career development (Lu et al., 2016). Second, older-age managerial employees are more experienced than their younger counterparts are in managing stressful situations, which could explain their higher satisfaction and job commitment even in a situation of higher occupational stressors. In addition, this study's respondents who were working in independent, privately owned hotels exhibited stronger job satisfaction, commitment, and prosocial behavior than their chain-employed counterparts did, which is inconsistent with the findings of a previous study (Karatepe and Uludag, 2008). The most plausible explanation for that difference according to hotel type is that chain hotels have to follow strict standards and guidelines issued from their international corporate offices, whereas the employees who work in independent, privately-owned hotels enjoy flexible policies, and that situation can easily create the sense of employees sharing life's ups and downs with the hotel business owners. Fourth, it is important to note that the traditional hotel-work stressors decreased significantly after the onset of COVID-19, meaning that after the outbreak of the pandemic, hotel employees reacted less sensitively to the traditional hotel-work stressors. The most plausible explanation for that change is that the hotel business was critically affected by stringent restrictions on tourist movements, and also by several social distancing measures, such as shelter-in-place orders, travel restrictions, bans on large social gatherings, and closed entertainment venues (Courtemanche et al., 2020). For example, permanent hotel employees were compelled to accept unpaid leave, while temporary hotel employees were forced to cut back on their working hours (Edgecliffe-Johnson, 2020, March 18). Unstable job security and paranoia about their work environment, such as the prospect of immediate joblessness, reduced pay, or a change of work department, undoubtedly helped current staff appreciate their jobs despite also perceiving traditional hotel-work-environment stressors. Academic and practical implications This study's findings have important academic implications. First, this research was novel in revealing new occupational stressors and their effects on hotel employees after the COVID-19 pandemic outbreak. In addition, this was the first empirical study in the hotel industry that compared hotel employees' occupational stressors and their consequences before and after the onset of the COVID-19 pandemic, and that investigated the relationships between those stressors and their consequences and the employees' sociodemographic and job-related variables. Second, this study suggests a new factor/domain structure for occupational stressors. 
Previous studies indicated a six-dimensional framework of occupational stressors that pertain to conflicts with home life, difficult tasks and unsatisfactory pay, conflicts arising from job responsibility, unfair treatment, a lack of support, and the organizational culture (Hwang et al., 2013(Hwang et al., , 2014. However, in the current study we loaded those items onto one single factor that we labeled traditional hotel-work stressors. We then identified two new domains of occupational stressors: unstable and more demanding hotel-work-environment stressors, and unethical hotel-labor-practices-borne stressors. Third, this study revealed that the traditional hotel-work stressor domain positively affected job satisfaction and organizational commitment as a reflection of the special situation in which most employees are fearful. However, our findings supported the notion that stressors can be positive factors for determining an enhancement of job performance and motivation to work hard (McGowan et al., 2006). This study also has several meaningful practical implications. First, it showed how clearly essential it is to identify employee stressors. In our findings, unstable and more-demanding hotel-work-environment stressors received the highest score of occupational stressors after the onset of the COVID-19 pandemic. Therefore, hotel management should identify and consider diverse remedies for alleviating such occupational stress. For example, hotel management must communicate with its employees about the hotel's situation, abide by their own promises, and simplify the documentation process through an electronic checking system. Second, unethical hotel-labor-practices-borne stressors had the second-highest post-COVID-19 outbreak stressor score, thus highlighting the importance of organizational norms and fulfillment of hotel employees' expectations. Even though cost-saving measures may be inevitable, hotel management must consider the hotel employees' psychological perceptions and reactions to situations of insecure employment. For example, before taking unfavorable actions, hotel management needs to approach its internal customers using effective communication messaging that thoroughly explains the hotel's emergent financial situation and prospects and that solicits their understanding. Third, the respondents ranked traditional hotel-work stressors below the other two stressor domains. This finding accompanies the fact that after the onset of the COVID-19 global health risk, the traditional hotelwork stressors were positively associated with both job satisfaction and organizational commitment. A logical explanation would be that hotel employees were grateful to have a job and therefore accepted the conventional stresses, such as long working hours, excessive workload, and repetitive work. Thus, hotel management should make serious efforts to help employees weather the unprecedented situation through job sharing, changes in work shifts, changes in work departments, training, and competency development. Fourth, job satisfaction and organizational commitment did not explain the low turnover intention following the onset of the COVID-19 pandemic. That may be explained by the fact that hotel employees are more fearful of job security than they are motivated by job dissatisfaction or weak organizational commitment. Furthermore, job satisfaction and organizational commitment are still important predictors of employees' behavior, such as job performance, subjective well-being, and prosocial behavior. 
Therefore, hotel management must develop and quickly provide relevant stress-management programs, such as mentoring, reading humanities books, consultations, team building, stress-release workshops, and outings. Finally, the perceived occupational stressors and their consequences varied across the employees' sociodemographic and job-related variables. This finding is important because hotel management will need to offer a variety of stress-relief programs that address the features associated with the most influential variables. For example, in the comparison of the stress levels before and after the onset of the pandemic, females, seniors, and managerial staff all showed more elevated levels of stress than their counterparts did. Therefore, management will need to care for the most-affected groups, and in particular, for senior employees who are concerned about retirement and family obligations. Conclusions and suggestions for future study The COVID-19 pandemic has caused severe financial deterioration in the hotel industry, and the ecosystem of hotel human resources has been greatly affected. Even more important is the fact that the structure of occupational stressors has changed. After the onset of the COVID-19 pandemic, we identified the existence of three domains of occupational stressors: traditional hotel-work stressors, unstable and more demanding hotel-work-environment stressors, and unethical hotel-labor-practices-borne stressors. Traditional hotel-work stressors turned out to be a positive predictor of job satisfaction and organizational commitment, whereas the other two stressors were negatively associated with job satisfaction and commitment. In addition, job satisfaction and organizational commitment positively affected job performance, subjective well-being, and prosocial behavior. On the other hand, job satisfaction and organizational commitment were no longer predictors of turnover intention. In addition, occupational stressors and their consequences were found to exert significantly different influences pre-COVID-19 versus post-COVID-19 outbreak, in association with the employees' sociodemographic and job-related variables. This finding provides important practical implications for hotel management regarding how to handle the changing ecosystem of hotel human resources. This study has some limitations. First, it relies on hotel employees' self-reports, which in turn depend on memory. Because respondents had to evaluate their perceived occupational stressors both before and after the COVID-19 pandemic, memory decay may have reduced the accuracy of their responses. However, this limitation is mitigated by the fact that only about two months elapsed between the spread of the pandemic in the United States and the time of the survey. A longitudinal analysis is nevertheless suggested to validate the results of this study. Second, the data were collected only in the U.S., where the largest number of confirmed cases of COVID-19 had been reported. A future study will need to use data from other countries in a comparison of the effects of the pandemic on hotel job security. Furthermore, a future study will need to conduct in-depth interviews with employees to identify latent psychological factors that could be influential, because our questionnaire could not include items tailored to individual circumstances. Finally, because this study dealt with a current situation of unstable employment conditions, future research should continue to identify substantial long-term plans and systems for employment and job security.
2020-12-09T14:07:21.185Z
2020-12-05T00:00:00.000
{ "year": 2020, "sha1": "d51119e15c10f9112300d3a3f7273f71c79854a4", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "de8438e54f84abb842a51e2e5c158296d9364464", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Psychology" ] }
248496716
pes2o/s2orc
v3-fos-license
Unsupervised Denoising of Optical Coherence Tomography Images with Dual-Merged CycleWGAN Noise is an important cause of low-quality optical coherence tomography (OCT) images. Models based on convolutional neural networks (CNNs) have demonstrated excellent performance in image denoising. However, OCT image denoising still faces great challenges because many previous neural network algorithms require a large amount of labeled data, which may be time-consuming or expensive to obtain. Besides, these CNN-based algorithms need numerous parameters and careful tuning, which consumes hardware resources. To solve the above problems, we propose a new Cycle-Consistent Generative Adversarial Network called Dual-Merged Cycle-WGAN for retinal OCT image denoising, which achieves remarkable performance with a small amount of unlabeled training data. Our model consists of two Cycle-GAN networks with improved generator, discriminator and Wasserstein loss to achieve good training stability and better performance. Using an image merge technique between the two Cycle-GAN networks, our model can obtain more detailed information and hence a better training effect. The effectiveness and generality of our proposed network have been demonstrated via ablation experiments and comparative experiments. Compared with other state-of-the-art methods, our unsupervised method obtains the best subjective visual effect and higher objective evaluation indicators. Introduction OCT is a technique proposed by Huang et al. [1] for high-resolution tomography of the internal microstructure of biological tissue based on the low-coherence properties of light. The technique uses a Michelson interferometer to perform coherent gating and spatial 2D or 3D scanning of biological tissue, providing high-speed, non-invasive tomography [2]. At present, this technique has been widely applied not only in ophthalmology, dentistry and other clinical diagnosis, but also in the field of industrial testing. However, OCT is a high-resolution imaging technology that requires a stable signal acquisition process. It inevitably suffers from noise pollution, which causes structural blur and distortion and hinders the accurate interpretation of subsequent images. Specifically, OCT imaging is based on coherent detection technology, which makes speckle the primary source of noise [3]. Although imaging technology and related equipment have been continuously developing, the noise problem has not been solved well, and it seriously affects the automatic analysis of OCT images, such as registration [4], retinal lesion region segmentation [5], and retinal layer information analysis [6]. Therefore, denoising OCT images is a primary task for improving automatic diagnostic performance. Recently, many methods have been proposed for OCT image denoising. These methods can be divided into two categories: hardware based and software based. Hardware-based algorithms mainly involve improvements to the imaging system. However, these algorithms are of limited use because they require specially designed acquisition systems and are thus not suitable for commercial usage. Software-based algorithms mainly process the digital image signal after OCT imaging. Among traditional software-based methods, wavelet transform-based methods [7] are widely used for OCT image denoising. 
These methods decompose the OCT image into images in different frequency bands by wavelet transform, in which the colored noise is distributed in the high-frequency components and the white noise is distributed in the low-frequency components; the high-frequency details are omitted or down-weighted, and a clear image can be synthesized after the image components are reconstructed. Later, a method based on the combination of the wavelet transform and the Wiener filter [8] was proposed. This method decomposes the image into four different frequency bands by wavelet transform, leaves the low-frequency part unchanged, and applies a Wiener filter to the high-frequency part. However, these methods show a certain degree of overfitting. Bo and Zhu [9] proposed wavelet-modification-based block matching and 3D filtering (BM3D) for OCT image denoising. Combining the advantages of traditional spatial-domain and transform-domain denoising algorithms, the DDID algorithm was proposed [10], which can effectively remove additive Gaussian white noise from images. There are some other methods to deal with this noise. The total variation approximation method has been applied to the multiplicative noise model [11]. This method uses a constrained optimization approximation with two Lagrange multipliers to build the model, but the fitting term is non-convex, so Yang et al. used the first-order primal-dual algorithm [12] to deal with images with speckle noise; the result retains image details very well, and the effect is better than the total variation method. However, these methods have some shortcomings, such as being unable to capture enough image features and the difficulty of choosing the right thresholds. With the development of deep learning, methods based on convolutional neural networks [13] have greatly improved image denoising, in which a stacked sparse autoencoder was applied to natural image denoising [14]. Hu Chen et al. proposed a residual encoder-decoder CNN for low-dose CT image denoising [15]. Ma et al. treated image denoising as an image-to-image translation problem and proposed speckle noise reduction in optical coherence tomography images based on an edge-sensitive cGAN [16]. However, all of the above deep learning methods belong to supervised learning, and all of them need labels corresponding to images to carry out experiments. To solve the above problems, we propose a new unsupervised method based on our proposed double-model Cycle-Consistent Adversarial Network together with image merging (fusion), called Dual-Merged Cycle-WGAN. This method can learn the image mapping from SD-OCT to EDI-OCT (enhanced depth imaging optical coherence tomography) in an unsupervised manner and can achieve remarkable denoising performance using only a small amount of unlabeled image data. CycleGAN CycleGAN is an image-to-image translation framework based on GAN, which defines generators G : X → Y and F : Y → X; discriminator DX distinguishes between x and F(y), and DY distinguishes between y and G(x). CycleGAN introduces the idea that "if we translate from one domain to another and back again we should arrive where we started" [17]. The objective function of CycleGAN consists of two types of loss. The adversarial loss evaluates the distance between the distribution of the generated images and the real images. The cycle consistency loss requires F(G(x)) ≈ x and G(F(y)) ≈ y, which is called cycle consistency. 
Based on the cycle consistency, only a source data set and a target data set need to be used in the training process, and there is no need for a one-to-one mapping relationship between the data, which solves the problem that a paired data set cannot be obtained or is difficult to obtain. In CycleGAN, we have two adversarial losses: LGAN(G, DY, X, Y) = Ey∼p(y)[log DY(y)] + Ex∼p(x)[log(1 − DY(G(x)))] and LGAN(F, DX, Y, X) = Ex∼p(x)[log DX(x)] + Ey∼p(y)[log(1 − DX(F(y)))]. The cycle consistency loss is defined by Lcyc(G, F) = Ex∼p(x)[||F(G(x)) − x||1] + Ey∼p(y)[||G(F(y)) − y||1], and the full objective is L(G, F, DX, DY) = LGAN(G, DY, X, Y) + LGAN(F, DX, Y, X) + λLcyc(G, F), where λ controls the relative importance of the cycle consistency loss. In the training phase, the parameters in G, F, DX, and DY are estimated by optimizing the full objective function, and we get G*, F* = arg min_{G,F} max_{DX,DY} L(G, F, DX, DY). CycleGAN, as an image conversion method, has important applications in fields such as photo enhancement, image coloring, style transfer, etc. WGAN GAN has achieved great success in image translation since it was put forward. However, it has problems such as difficulty in training and insufficient diversity of generated results. Wasserstein GAN (WGAN [18]) improved on these problems of GAN. Arjovsky et al. stated that the difficulty in training GAN is due to the poorly designed loss function. Many loss functions commonly used in GAN, such as the JS divergence, are locally saturated, which causes the problem of vanishing gradients. Therefore, they proposed the Wasserstein distance, which has better continuity and differentiability. Suppose the distributions of the real images and the generated images are Pr and Pg. The Wasserstein distance between Pr and Pg is defined by W(Pr, Pg) = sup_{||f||_L ≤ 1} Ex∼Pr[f(x)] − Ex∼Pg[f(x)], where the supremum is taken over all 1-Lipschitz functions f : X → R and X is a compact metric space. In the context of GAN, the function f corresponds to the discriminator D(x), and the objective function of WGAN becomes min_G max_{D∈𝒟} Ex∼Pr[D(x)] − Ex̃∼Pg[D(x̃)], where 𝒟 is the set of 1-Lipschitz functions and Pg is the model distribution defined by x̃ = G(z), z ∼ p(z), where p(z) is some simple noise distribution, such as a uniform distribution or a Gaussian distribution. The 1-Lipschitz constraint on the discriminator is enforced through clipping: the weights of the discriminator are kept in a compact space [−c, c]. In the final algorithm, WGAN has four changes relative to GAN: • Remove the sigmoid in the last layer of the discriminator • The losses of the generator and discriminator do not take the log • Every time the parameters of the discriminator are updated, their absolute values are truncated to no more than a fixed constant c • Replace the optimizer Adam with RMSProp Overview In this article, we propose a new unsupervised method based on our proposed Dual-Merged Cycle-WGAN for OCT image denoising, which has remarkable performance with a small amount of unlabeled training data. We use two Cycle-GAN networks combined with an image fusion technique. Specifically, let us assume the sets of original noisy and clean OCT images are X*, Y*, the two Cycle-GAN networks are M1 and M2, and their generators are G1 and G2, respectively. First, we randomly crop the original noisy and clean images into small patches, each set consisting of 2100 pictures. Then we apply normalization to these data. Denote the processed data by X for noisy and Y for clean images. We use X and Y to train our first model M1. The output of M1 is merged with the original image via a linear combination plus a same-sized random image generated from a standard normal distribution. The result above is taken as the input of M2, and the output of M2 is our predicted image. More formally, let x ∈ X; the computation procedure is described as follows: x2 = a·G1(x) + b·x + z and ŷ = G2(x2), where a = 0.8, b = 0.2, z ∼ N(0, 1), and ŷ is the output of M2. 
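For concreteness, the merge step between the two stages and the weight-clipped Wasserstein critic update described above can be sketched in PyTorch as follows. This is a minimal illustration rather than the authors' released code; the generator and critic modules, and the clipping constant c = 0.01, are assumptions made for the example.

```python
import torch

def merge_stage(G1, x, a=0.8, b=0.2):
    """Merge step between the two Cycle-GAN stages: x2 = a*G1(x) + b*x + z."""
    z = torch.randn_like(x)              # same-sized noise image, z ~ N(0, 1)
    return a * G1(x) + b * x + z

def critic_loss(D, real, fake):
    """Wasserstein critic objective: maximize E[D(real)] - E[D(fake)]."""
    return -(D(real).mean() - D(fake).mean())

def clip_critic_weights(D, c=0.01):
    """Enforce the 1-Lipschitz constraint by clipping critic weights to [-c, c]."""
    for p in D.parameters():
        p.data.clamp_(-c, c)
```

In a full training loop, G2 would receive merge_stage(G1, x) as its input, the critics would be optimized with RMSProp, and clip_critic_weights would be called after every critic update, matching the WGAN recipe listed above.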
We use x2 and ŷ to train our second model M2, and take ŷ as the final prediction. We also improve the structure of the two Cycle-GAN networks: • Replace the original U-Net generator with the new Multi-U-Net • Use the Wasserstein loss instead of the original loss of the Cycle-GAN network To evaluate denoising performance, we use four evaluation indicators: SNR, ENL, PSNR and SSIM. Data Augmentation Our experiments use a dataset which contains 21 noisy pictures and corresponding clean pictures (each of size 360 x 800, with the clean pictures used only for metrics), which is not enough for training our network. Hence, we preprocessed and augmented our dataset by the following steps. 3.2.1. Contrast adjustment. By adjusting the contrast of the clean pictures, we can sharpen the edges of the layer lines, which makes the original pictures clearer. Magnification. To obtain more detailed features, improve network efficiency and generalize the model's performance, we magnify the original images by three times. Cutting. The original images are too large and too few in number, which is not suitable for network training. Therefore, we uniformly cut the images into 256 x 256 patches, which generates 2100 images for model training. Dual-Merged Cycle-WGAN Our model consists of two Cycle-GAN networks with an improved generator and discriminator, both using the Multi-U-Net. Multi-U-Net: a generator for improving Cycle-GAN U-Net [19] is a structure commonly used in the field of medical image processing, and it is also used by the generator in the standard Cycle-GAN. The structure of U-Net makes it easy to capture detailed information in images. Moreover, U-Net can localize the segmented parts of the image. Motivated by these good properties of U-Net, we propose a somewhat more complex structure called Multi-U-Net. First, the input is divided into multiple copies and fed into multiple U-Nets with different depths in turn; the input image is scaled by a different factor according to the downsampling depth of each branch. After that, the data are processed through the block layers, and upsampling is used to unify the outputs of the different scales back to the same size as the input image. In this way, the different depths of this U-Net structure can learn image features at different scales. Finally, the results of the multiple U-Nets are merged to obtain the Multi-U-Net output. The visual procedure can be found in Figure 1. The result combines hidden feature information from multiple scales, and thus yields better noise reduction performance. Besides, increasing the number of U-Nets with different depths is similar to increasing the width of a single layer of a fully connected network. In this way, we can enhance the network's image learning ability. For better denoising performance, we introduce the image merge technique between the two Cycle-GAN networks. Multilevel Cycle-GAN: structure of the network for training As previously discussed, a single Cycle-GAN can already achieve a good noise reduction effect. However, Cycle-GAN is an image-to-image translation network [17]; that is, we can use Cycle-GAN to convert noisy images into clear images, or input clear images into the network to get noisy images. The greater the difference between the noisy image and the clear image, the more difficult the network training will be, and the worse the effect will be. Thus, our model uses image merging between the two Cycle-GAN networks to obtain more detailed information and hence a better training effect. Figure 2 shows part of the structure we designed. 
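One possible reading of the Multi-U-Net idea, sketched below in PyTorch, is a set of encoder-decoder branches with different downsampling depths whose outputs are averaged. The layer widths, the additive skip connections and the averaging rule are assumptions made for illustration, since the text describes the structure only at the level of Figure 1.

```python
import torch
import torch.nn as nn

class UNetBranch(nn.Module):
    """A single encoder-decoder branch; `depth` controls how far it downsamples."""
    def __init__(self, depth, ch=16):
        super().__init__()
        self.inc = nn.Conv2d(1, ch, 3, padding=1)
        self.downs = nn.ModuleList(
            [nn.Conv2d(ch, ch, 3, stride=2, padding=1) for _ in range(depth)])
        self.ups = nn.ModuleList(
            [nn.ConvTranspose2d(ch, ch, 2, stride=2) for _ in range(depth)])
        self.outc = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, x):
        h = torch.relu(self.inc(x))
        skips = []
        for down in self.downs:
            skips.append(h)                      # keep the feature map before each downsample
            h = torch.relu(down(h))
        for up, skip in zip(self.ups, reversed(skips)):
            h = torch.relu(up(h)) + skip         # upsample and add the matching skip
        return self.outc(h)

class MultiUNet(nn.Module):
    """Average the outputs of U-Net branches with different depths (e.g. 2, 3, 4)."""
    def __init__(self, depths=(2, 3, 4)):
        super().__init__()
        self.branches = nn.ModuleList([UNetBranch(d) for d in depths])

    def forward(self, x):
        return torch.stack([branch(x) for branch in self.branches]).mean(dim=0)
```

With 256 x 256 single-channel patches, each branch returns an output of the same size, so averaging the branch outputs is one straightforward way to merge features learned at different scales.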
Loss function To improve the training stability of Cycle-GAN, we extend the idea of WGAN and apply the improved Wasserstein loss to Cycle-GAN. The adversarial losses become LGAN(G, DY, X, Y) = Ey∼PY[DY(y)] − Ex∼PX[DY(G(x))] and LGAN(F, DX, Y, X) = Ex∼PX[DX(x)] − Ey∼PY[DX(F(y))]. The cycle consistency loss is defined by Lcyc(G, F) = Ex∼PX[||F(G(x)) − x||1] + Ey∼PY[||G(F(y)) − y||1]. Combined with the cycle consistency loss, our full objective for CycleWGAN is L(G, F, DX, DY) = LGAN(G, DY, X, Y) + LGAN(F, DX, Y, X) + λLcyc(G, F). Experimental data The study was approved by the Institutional Review Board of Sichuan University, and informed consent was obtained from all subjects. As mentioned before, our experiment is based on 21 noisy and 21 clear SD-OCT images with a size of 360 x 800 pixels. For the comparative experiments, we use the method in Section 3.5 to divide the 21 pictures into 2100 pictures of 256 x 256 pixels, of which 1890 are used as the training set and 210 as the test set. All subsequent comparative experiments use this data set to ensure the fairness of the experiments. For the ablation experiments, we use the same data expansion method to generate 2500 training images and 500 test images; this data set is used in each group of ablation experiments. Data expansion is an effective strategy to increase the diversity of the data distribution and alleviate overfitting. In this paper, we expand the data set by enlarging the pictures and then randomly cropping, which increases the amount of data by 100 times. Evaluation Metrics In order to evaluate the denoising performance of different methods, four indexes, namely the signal-to-noise ratio (SNR), equivalent number of looks (ENL), structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR), were used to analyze the experimental results quantitatively. These four indicators are calculated as follows. SNR. The signal-to-noise ratio (SNR) is a global performance measure which has been widely used to evaluate denoising performance when reference clean images are not available [20] [21]. The SNR can be calculated as SNR = 10·log10(max(I)² / σb²), where max(I) is the maximum pixel value of the denoised image and σb is the standard deviation of the noise in a background region. ENL. The equivalent number of looks (ENL) [22] is a commonly used performance measure for speckle suppression, which measures smoothness in regions that appear to be homogeneous. It is defined as ENL = μb² / σb², where μb and σb denote the mean value and standard deviation of the background region, respectively. SSIM. The structural similarity index measure (SSIM) is a method for predicting the similarity of two pictures [23]. Its formula is based on three comparison measurements (luminance, contrast and structure) between the two pictures x and y, and in its common form SSIM(x, y) = (2μxμy + c1)(2σxy + c2) / ((μx² + μy² + c1)(σx² + σy² + c2)), where μx, μy and σx², σy² are the means and variances of x and y, σxy is the covariance of x and y, and c1, c2 (and c3) are constants. The larger the value of the SSIM, the higher the similarity between the two pictures. PSNR. The peak signal-to-noise ratio (PSNR) is an engineering term for the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation [24]. Given a noise-free image I of size m x n and its noisy approximation K, the MSE is defined as MSE = (1/(mn)) Σi Σj [I(i, j) − K(i, j)]², and the PSNR is defined as PSNR = 10·log10(MAXI² / MSE), where MAXI is the maximum possible pixel value of the image. Qualitative Evaluation As can be seen from Figure 3, the proposed unsupervised learning method performs well on all test samples, eliminating image noise in different regions. Besides, the proposed model retains and enhances the retinal layer structure and choroidal vessels. Although good results are obtained on undamaged images, our method also performs well on some abnormal regions. 
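For reference, the four metrics defined in the Evaluation Metrics subsection above can be computed with NumPy and scikit-image as in the following sketch. The choice of the background patch used for SNR and ENL is an assumption, since the text does not specify its coordinates.

```python
import numpy as np
from skimage.metrics import structural_similarity

def snr(denoised, background):
    """SNR = 10*log10(max(I)^2 / sigma_b^2); background is a flat, noise-only region."""
    return 10 * np.log10(denoised.max() ** 2 / background.std() ** 2)

def enl(background):
    """ENL = mu_b^2 / sigma_b^2 over a homogeneous background region."""
    return background.mean() ** 2 / background.var()

def psnr(clean, denoised, max_val=255.0):
    """PSNR = 10*log10(MAX^2 / MSE) between the clean reference and the denoised image."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

def ssim(clean, denoised, max_val=255.0):
    """SSIM computed by scikit-image with the given data range."""
    return structural_similarity(clean, denoised, data_range=max_val)

# Example usage (background taken, hypothetically, from a corner of the B-scan):
# bg = denoised[:32, :32]
# print(snr(denoised, bg), enl(bg), psnr(clean, denoised), ssim(clean, denoised))
```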
In order to further evaluate the performance of our proposed method, Figure 4 shows examples of denoising results of different methods, in which all methods use the same training set and test set. It can be seen that although the BM3D method can remove noise well, the boundaries of the retinal layers are blurred. The Noise2Noise method also removes noise well, but the layers are not smooth enough and there are still many artifacts. The Pix2pix method performs poorly in the test results and cannot suppress the noise well, which also leads to a blurred retinal layer structure. In the results of WGAN and CGAN, the edges of the retinal layers are distorted, and in the results of CGAN, the external limiting membrane (ELM) is not well enhanced. Compared with these methods, our proposed method removes most of the noise, enhances the retinal layer information and yields clearer layer boundaries, which demonstrates the effectiveness of the proposed method. Quantitative Evaluation In order to quantitatively evaluate the speckle elimination performance, the four indexes SNR, ENL, SSIM and PSNR are listed in Table C. The performance of some typical traditional methods is shown in the table. The structural similarity and peak signal-to-noise ratio of BM3D are low, and its SNR and ENL are much higher than those of the target image, which may be caused by its poor ability to suppress speckle noise. Noise2Noise has a good result on SNR, but its performance on the other three indicators is very poor, which may be due to artifacts between the retinal layers. The middle section lists the quantitative performance of some state-of-the-art deep learning based methods, including Pix2Pix, WGAN and CGAN. Compared with the other deep learning algorithms, CGAN obtains the lowest ENL and also obtains good results on SSIM and PSNR, but its SNR is still far from that of the clear picture. Among them, the Pix2pix algorithm performs poorly on all four indicators. Although WGAN works well on SNR and ENL and is very close to the target, its SSIM and PSNR are not the best. Compared with the other methods, the Dual-Merged CycleWGAN proposed by us achieves the best results on all four indicators. Ablation Experiment In order to evaluate the contribution of each component used in the proposed Dual-Merged CycleWGAN, four groups of ablation experiments were conducted, with SSIM, PSNR, SNR and ENL reported in Table D. It can be seen from Table D that, when the network with the Wasserstein loss function is compared with the original Cycle-GAN network, the performance on SSIM, PSNR, SNR and ENL is improved. This may be because the Wasserstein distance has good smoothing properties compared with the JS distance of the original network, which can effectively solve the problem of vanishing gradients, make the denoising effect of the network better, the definition of the denoised picture higher and the structure retention more complete. Compared with the original Cycle-GAN network, the network using the Multi-U-Net structure is almost identical to the original results on SNR, but it improves to a certain extent on PSNR, SSIM and ENL. This may be because the Multi-U-Net structure retains more training parameters, makes the mapping learned by the network more expressive, and better handles the denoising problem. A double-layer Cycle-GAN network is also used and compared with the original Cycle-GAN network. 
The four evaluation indexes have also improved to a certain extent, which may be because the double-layer Cycle-GAN process makes the mapping relationship of each network simpler, reduces the complexity of the mapping from noisy pictures to clear pictures, and improves the denoising effect. Finally, the proposed method is compared with the original Cycle-GAN network. Our proposed method is superior to the original method on all indicators: the corresponding SSIM, PSNR, SNR and ENL are increased by 10.1%, 4.8%, 3.0% and 8.8%, respectively. Compared with the other previous methods, the best scores are obtained on all evaluation indexes except SNR. These results show the rationality of this method and the effectiveness of the network structure design. Conclusion and Future work We proposed a new Cycle-Consistent Generative Adversarial Network called Dual-Merged Cycle-WGAN for OCT image denoising, which has remarkable performance with a small amount of unlabeled training data. This is the first time Dual-Merged Cycle-WGAN has been used for OCT image denoising, and it achieves a good denoising effect. Unlike previous neural network algorithms, we used a more complex structure, which allows our model to learn more hidden features of the image through a large number of hidden parameters without setting too many hyperparameters. In addition, our newly proposed Dual-Merged Cycle-WGAN, compared with previous algorithms, can achieve a good denoising effect by training on only a small amount of noisy images. Experimental results show that our network obtains good subjective visual effects and higher objective evaluation indicators, and makes the retinal layer edges clearer. Still, there are some limitations in our study. Our data set only contains 21 noisy pictures, which is not comprehensive enough. Although our model performed well on our dataset, we believe that with more data we could train our model further and obtain better performance. Therefore, an important future research direction is to test our model on more actual OCT images and further optimize the hyperparameters of the proposed model to achieve better generalization capability. Besides, how to speed up the parallel computation of our model is also an important aspect that we will continue to focus on.
2022-05-03T06:47:28.827Z
2022-05-02T00:00:00.000
{ "year": 2022, "sha1": "91c58824abccb8188b7e8ec3232e02f58c8746cc", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "91c58824abccb8188b7e8ec3232e02f58c8746cc", "s2fieldsofstudy": [ "Computer Science", "Engineering", "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
55744136
pes2o/s2orc
v3-fos-license
Financial Soundness Indicator, Financial Cycle, Credit Cycle and Business Cycle-Evidence from Taiwan The business cycle is the repeated expansion (from trough to peak) and contraction (from peak to trough) of real economic activity. The credit cycle is the cyclical process of bank credit, ranging from short/long-term loans to loans to enterprises and loans to individuals. The financial cycle reflects ups and downs in asset prices and in financial institutions' balance sheets. This paper examines the linkage among these cycles as well as their lead-lag relationships. Theoretically, the credit cycle is one of the drivers of the business cycle, and the financial cycle is a fundamental cause of the credit cycle. Based on Taiwan's quarterly data, this paper firstly identifies the cyclical behavior of indicators of real economic activity, bank credit and asset prices in the recent decade by defining expansion phases and contraction phases of the cyclical variables. Second, this paper calculates a concordance index to examine the degree of synchronization among cycles. Third, because the soundness of financial institutions' assets and liabilities may drive the financial cycle, this paper employs the IMF's Financial Soundness Indicators (FSIs) as predictors of the expansion and contraction phases of the cyclical variables. Specifically, the paper assesses the health of banks' balance sheet variables by Probit estimation of the linkage between FSIs and the expansion/contraction phases of each cycle. Based on the empirical evidence, our knowledge of how the asset/liability condition of financial institutions corresponds to the expansion and contraction phases of the financial, credit and business cycles is enhanced. Authorities concerned about financial stability should oversee the performance of FSIs and engage in prompt corrective actions when the level and volatility of those indicators change sharply. 
Introduction The business cycle refers to the cyclical behavior (boom and bust) of real economic activities (such as consumption, investment and total production) and nominal economic variables (such as the price level). This phenomenon is driven by real factors (productivity), monetary and fiscal policy factors, expectations (including consumer and business confidence), international factors and their interaction. Existing studies such as Keynes (1936), Galbraith (1954) and Shiller (1989) emphasized that the fluctuation in economic and financial activity is mainly due to "animal spirits", emphasizing irrational behavior in interpreting the ups and downs of living standards. Other researchers think that economic expectations that are not realized constitute the driving force of economic fluctuation (DeLong, 1992; Siegel, 1998; Edwards, Biscarri, & Perez de Gracia, 2003). A contraction in bank credit is followed by a contraction of investment, consumption and other real activities. Therefore, the credit cycle is a source of the business cycle, and this proposition is supported by Eckstein and Sinai (1986) and Summers (1986). Fisher (1933) proposed the debt-deflation theory, indicating that declining asset prices reduce the net worth of people and firms such that consumer spending and corporate investment are contracted, thereby causing the overall contraction of economic activity, and the price level is also pushed down. Bernanke and Gertler (1989) believed that the decline in asset prices deteriorates the net worth of individuals/firms; expenditures, investment and economic activity then shrink, which in turn makes asset prices decline further. Mishkin (2001) proposed that fluctuations in asset prices, for example the stock market price, real estate price and exchange rate, cause fluctuations in investment and overall economic activity. While the aforementioned studies did not point out the lead-lag relationship between the cycles of asset prices and real economic activities, this paper examines whether the fluctuation in asset prices affects the ups and downs of real economic activities. Bernanke (1993) and Boivin, Kiley, and Mishkin (2010) emphasized the importance of credit supply by FIs. They argued that the asset-liability condition of FIs determines the availability of credit. A bank's liquidity and financial soundness affect its ability to meet funding needs, thereby influencing the boom and bust of real economic activity. When the financial soundness of the banking system is healthy, normal liquidity and loan supply support sustained growth of real activity. On the contrary, deterioration of financial soundness limits the ability of the banking system to supply loanable funds and therefore tightens the vitality of real economic activity. Based on this consideration, the recent global development of the prudential supervision paradigm for financial institutions has gradually shifted from micro-prudential to macro-prudential supervision (Note 1). This paper proposes that the credit cycle serves as one cause of the business cycle, and that the source of the credit cycle is determined by the soundness of the financial system. Changes in asset prices influence banks' balance sheets and bank loan supply. As asset prices decrease, financial health deteriorates and loan supply is reduced. When asset prices rise, financial health is amplified, and the availability of bank loans increases. Adrian, Moench, and Shin (2009) confirmed that when U.S. 
securities brokers' and dealers' leverage is higher and the growth rate of shadow banking assets is higher, banks' return on assets tends to be greater. Ho (2011) indicated that financial disequilibrium is defined as too-fast growth in credit and asset prices and excessive expansion of the balance sheets of FIs. Excessive growth in credit and asset prices is followed by a greater probability of financial crisis. Borio and Lowe (2002) found that the credit gap (growth rate higher than the average growth rate), share price gap, exchange rate gap and output gap have significant predictive ability for prosperity phases as well as financial distress over three to five years. Trichet (2010) mentioned that the leverage, liquidity and asset prices of FIs tend to depart from normal levels before a financial crisis. To sum up, the business cycle is the variability of real economic activity. The cyclical behavior arising from the expansion and contraction of credit amount is the credit cycle. The ups and downs of asset prices and the expansion/shrinking of the balance sheet scale of FIs is called the financial cycle. Borio et al. (2001), Danielsson et al. (2004), Kashyap and Stein (2004), Brunnermeier et al. (2009) and Adrian and Shin (2010) have addressed that the interaction among cycles may amplify economic fluctuations and possibly lead to serious financial distress and economic dislocations. This paper examines the linkage among cycles based on Taiwan's data. Asset prices and credit are proxied by the real estate price, bank credit amount and share price. The health of FIs is proxied by the 12 financial soundness indicators (FSIs) proposed by the IMF. Probit estimation of regressions relating the 12 FSIs to the different phases (expansion/contraction) of the various cycles is also executed. This paper tries to improve the understanding of which FSIs are significantly associated with the expansion/contraction phases of the financial cycle. Government policy aiming at financial stability should watch for excessive rises or falls of key FSIs of financial institutions to prevent large fluctuations in the financial/credit cycle, and in turn the business cycle. Prompt corrective actions are proposed when large swings in FSIs are present. The next section describes the cyclical variables, FSIs, data collection and econometric method. The third section presents the empirical results. The fourth section concludes. 
Asset Price and Credit-Real Estate Price, Stock Price and Bank Credit Existing studies have discussed the cyclical behavior of asset price.Borio and Lowe (2002) defined asset price prosperity (asset price boom) as the deviation from its own upward growth trend.Detken and Smets (2004) and Adalid and Detken (2007) classified asset price boom/bust as its growth rate exceeds long-term trend by 10%.Igan and Loungani (2009) analyzed the regional housing price of UK, U.S., Dutch and other developed countries, and found that in the long run, population as well as construction cost is main determinants of housing price.Changes in market structure and regulation are short-term factors.Cunningham and Kolet (2007) and Hall, McDermott and Tremewan (2006) found that the duration and amplitude of real estate price cycle are not identical across countries and time period.Mendoza and Terrones (2008) found that the frequency of credit boom is very high, and credit growth is usually accompanied by prosperity of economic output, consumption and investment.As real output is in expansion phase, credit amount tends to be higher over its growth trend.When real output is in contraction phase, credit amount is lower than its growth trend.Similar study is referred to Gourinchas, Valdes, and Landerretche (2001).Edwards, Biscarri, and Gracia (2003) found that stock price cycle is getting more synchronized (concordance) among different countries and financial markets.Terrones (2004) found that real estate price cycles are also synchronized among countries.Common factors are average interest rate around the world, output level of United States and world commodity prices.Phylaktis and Ravazzolo (2005) found that overall stock price of developed economy and emerging markets are all affected by exchange rate.Similar studies are referred to Gourinchas and Rey (2007) and Caballero, Farhi and Gourinchas (2008).Allen and Gale (2007) pointed out the correlation between financial cycle and financial crisis.When economic activity enjoys a long time warm, credit amount expands, asset price is inflated and currency is overvalued.These situations are then followed by a financial crisis.Reinhart and Rogoff (2009) found significant increase in real estate price and huge credit expansion are usually followed by a banking crisis. 
IMF's Financial Soundness Indicators (FSIs) Financial system instability affects the functioning of financial market/intermediate and in turn real economic activity such as consumption and investment.When financial crisis occurs, market order and investor's confidence is loss and recovery period is usually very long on the history.To strengthen the financial system stability, the International Monetary Fund (IMF) and the World Bank promoted the Financial Sector Assessment Program (FSAP) in 1999, provided a complete set of financial stability analysis framework to assist government with identifying the performance and weakness of financial system and potential economy instability.The program ensures stable financial sector development and assists government to develop appropriate policy and measure of financial health.Financial Soundness Indicators (FSIs) are assessments of soundness of financial institution, overall financial market, real estate market, and financial risk of corporate/household sector.Main purpose of FSIs is to monitor overall risk and vulnerability of financial system.The total number of FSIs is 39.According to the importance and information availability, indicators are divided into 2 sets, namely, Core Set and the Encouraged Set.Core Set consists of 12 core indicators to inspect financial stability of depository institutions.The data is easy to obtain and can be universally applicable to all countries.Encouraged Set consists of 27 indicators to inspect financial stability of depository institution, other financial institutions, non-financial corporate sector, household sector, market liquidity and real estate market.Because comprehensive information availability for the latter is relatively difficult in Taiwan, this paper only incorporates 12 indictors of Core Set into analysis. Twelve indicators of Core Set are, (1) CAR, regulatory capital to risk-weighted asset, (2) CAR1, regulatory tier-1 capital to risk-weighted asset, (3) NPLEQU, nonperforming loan net of provision to capital, (4) NPL, nonperforming loan to gross loan, (5) COVERAGE, loan loss provision to nonperforming loan, (6) OBSLOAN, loan-in observation to total loan (original indicator is sectoral lending divided by total loan, yet the paper changes it as OBSLOAN due to data unavailability), (7) ROA, return on asset, (8) ROE, return on equity, (9) NETINT, interest margin to gross income, (10) NONINT, noninterest expense to gross income, (11) LIQASSET, liquid asset to total asset, (12) LIQ, liquid asset to short-term liability.The definition and mnemonics of 12 indicators are summarized in Table 2 Data This paper collects individual financial data of all public banks (both domestic and foreign) and then averages them to obtain aggregate soundness indicators of banking sector in Taiwan.All Financial data and macro data such as asset prices and bank credit amount are collected from Taiwan Economic Journal (TEJ).The data frequency is quarterly and ranged from 2001Q1~2013Q4. Peak, Trough, Expansion, Contraction and Concordance Index Existing literature addressing identifying cyclical behavior variable is well-documented such as Burns and Mitchell (1946), Stock and Watson (1999), Harding and Pagan (2002a) and Backus and Kehoe (1992).This paper refers to Claessens, Kose and Terrones (2010) to identify cyclical characteristics (peak, trough, expansion phase and contraction phase) of financial variable as well as concordance between/among cycles. 
It is intuitive to identify the peak and trough of a cyclical variable. If a variable f_t is at a peak, then its level at time t should be greater than at t+j and t−j, that is, (f_t − f_{t−j}) > 0 and (f_t − f_{t+j}) > 0, where the value of j depends on the researcher's choice and the data frequency (in this paper, j = 2). Similarly, if a variable f_t is at a trough, its level at time t should be smaller than at t+j and t−j, that is, (f_t − f_{t−j}) < 0 and (f_t − f_{t+j}) < 0. A complete cycle consists of two phases. One is the contraction phase, defined as the period from a peak of the cyclical variable until the following trough. The other is the expansion phase, defined as the period from a trough toward the following peak. When a peak and a trough are very far apart, the four-quarter period before a trough can be viewed as a contraction phase and the four-quarter period before a peak as an expansion phase. The expansion phase is sometimes called the recovery phase, which refers to the early stage (four or six quarters) of the expansion phase (Sichel, 1994). If the cyclical variable is a financial one, the recovery phase is also called a financial upturn, and the contraction phase a financial downturn.

It is interesting to evaluate whether the expansion/contraction phase of one cyclical variable coincides with the expansion/contraction phase of another variable. Harding and Pagan (2002b) and Claessens, Kose and Terrones (2011) introduced the Concordance Index (CI) to measure the co-movement between two cycles. The CI is calculated as

CI = (1/T) Σ_{t=1}^{T} [ C^x_t · C^y_t + (1 − C^x_t)(1 − C^y_t) ],

where C^x_t = 1 if x_t is in an expansion phase and C^x_t = 0 if x_t is in a contraction phase; likewise, C^y_t = 1 (0) if y_t is in an expansion (contraction) phase. Because a variable at time t is in either an expansion phase or a contraction phase but not both, the CI measures the proportion of periods in which the two series are in the same phase. If the index equals 1, the two variables are perfectly pro-cyclical: whenever x is in its expansion phase, y is as well. If the index equals 0, the two variables are perfectly counter-cyclical: whenever x is in its expansion phase, y is in a contraction phase. A greater CI implies higher synchronization between cycles.

Predicting Cycle Phase by FSIs
This paper employs the 12 FSIs to examine how they are associated with the expansion/contraction phase of a cyclical variable by regression analysis. Because the predicted variable is binary (a dummy), Probit estimation is used. The regression equation is

Probability(expansion phase_t = 1) = F(FSI_t) + U_t.  (1)

The 36 dummies representing the dichotomous classification of expansion and contraction phases of the cyclical variables serve as predicted variables. Including all 12 FSIs as explanatory variables in the same regression could cause a multicollinearity problem, so this paper uses simple regressions in which each equation incorporates only one FSI at a time. Thus, for a given predicted variable, 12 regressions with a single explanatory variable (one FSI) are estimated.

Concordance between Cyclical Variables
Table 3 reports descriptive statistics. It is worth noting that we did not identify any peak or trough for total bank long-term loans (LTLOAN); it therefore has no expansion or contraction phase, and the dummies proxying its expansion phase (EXPLTLOAN) and contraction phase (CONLTLOAN) are both equal to zero.
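To make the dating rule, the Concordance Index and the single-FSI Probit of Eq. (1) concrete, the following sketch is a hypothetical Python illustration (not the authors' code). It assumes numpy and statsmodels are available, and the toy series merely stand in for REALTWN, STOCK and one FSI such as NPL; the simplified phase dummy below treats every dated quarter as either expansion or contraction, whereas the paper's EXP/CON dummies are constructed separately.

```python
import numpy as np
import statsmodels.api as sm

def turning_points(y, j=2):
    """y[t] is a peak (trough) if it is above (below) all neighbours within j periods."""
    peaks, troughs = [], []
    for t in range(j, len(y) - j):
        window = np.r_[y[t - j:t], y[t + 1:t + j + 1]]
        if np.all(y[t] > window):
            peaks.append(t)
        elif np.all(y[t] < window):
            troughs.append(t)
    return peaks, troughs

def phase_dummy(y, j=2):
    """1 during an expansion phase (after a trough, before the next peak),
    0 during a contraction phase, NaN before the first turning point."""
    peaks, troughs = turning_points(y, j)
    phase, state = np.full(len(y), np.nan), np.nan
    for t in range(len(y)):
        if t in troughs:
            state = 1.0            # expansion starts at a trough
        elif t in peaks:
            state = 0.0            # contraction starts at a peak
        phase[t] = state
    return phase

def concordance(cx, cy):
    """Harding-Pagan CI: share of quarters in which both series are in the same phase."""
    ok = ~np.isnan(cx) & ~np.isnan(cy)
    return np.mean(cx[ok] * cy[ok] + (1 - cx[ok]) * (1 - cy[ok]))

# toy quarterly series standing in for REALTWN, STOCK and an FSI (placeholders only)
rng = np.random.default_rng(0)
t = np.arange(52)                                  # 2001Q1-2013Q4: 52 quarters
real  = np.sin(t / 4) + 0.1 * rng.standard_normal(52)
stock = np.sin(t / 4 + 0.5) + 0.1 * rng.standard_normal(52)
npl   = 0.05 + 0.01 * rng.standard_normal(52)

exp_real, exp_stock = phase_dummy(real), phase_dummy(stock)
print("CI(REALTWN, STOCK) =", concordance(exp_real, exp_stock))

# single-FSI Probit, as in Eq. (1): P(expansion_t = 1) = F(b0 + b1 * FSI_t)
ok = ~np.isnan(exp_real)
probit = sm.Probit(exp_real[ok], sm.add_constant(npl[ok])).fit(disp=0)
print(probit.params)
```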
Table 4 reports the pair-wise Concordance Index (Claessens, Kose, & Terrones, 2011) among the cyclical variables. Several cyclical variables show great concordance (are pro-cyclical) in their phases. For example, the Shin-Yi Real Estate Price Index of Taiwan (2: REALTWN) has high concordance with (5: STOCK), (6: DSTOCK), (9: STLOAN), (12: DBUSLOAN), (15: GDP), (16: DGDP) and (18: IPI), meaning that real estate prices tend to co-move positively with stock market prices, short-term loans, loans to business, GDP and industrial production. The explanation is quite intuitive. As stock market performance improves, households and companies tend to engage in financial as well as physical investment, which is why short-term loans and the change in loans to business increase as well. The increase in real estate prices also has a wealth effect on aggregate demand, inducing the corporate sector to raise industrial production, and GDP is fostered as well.

It is also interesting that (9: STLOAN), (12: DBUSLOAN) and (14: DINDLOAN) have relatively high concordance with real estate prices and GDP, meaning that short-term loans, the change in loans to business and the change in loans to individuals co-move strongly and positively with real estate prices and economic activity. Based on this evidence of concordance between cycles, large swings in loans may correspond to volatile real estate prices and economic activity; more specifically, a sharp increase in loans should attract the government's attention, since real estate prices and GDP are likely increasing as well. The brief interpretation of the Granger causality evidence is that stock prices are the most exogenous variable, leading changes in other variables such as real estate prices, GDP and industrial production. Real estate prices in turn lead changes in loan amounts and the three real economic activity indicators. A change in stock market performance has a wealth effect on households and also increases corporate investment (by Tobin's Q theory), which then leads to increases in real estate prices and loan amounts. The increase in loan amounts facilitates household purchases and corporate production, which in turn raises gross domestic product. To examine the exogeneity of the variables, we employ four cyclical variables, namely real estate prices (REALTWN), stock prices (STOCK), loan amount (LOAN) and real economic activity (GDP), estimate a vector autoregressive (VAR) model with a 4-period lag, and then perform forecast error variance decomposition (VDC) analysis. In Figure 1, we observe that stock prices explain the greatest portion of the forecast error variance of real estate prices, and the loan amount explains the second largest portion of the forecast error variance of GDP. This means that stock prices are the main factor explaining real estate prices and loans are the driving factor explaining real economic activity, which supports the view that asset prices drive credit supply and in turn lead to real economic fluctuations.
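A minimal sketch of the VAR/variance-decomposition step described above, assuming the statsmodels package is available; the random-walk data below are placeholders for the TEJ series, differencing is applied only to keep the toy data roughly stationary (the paper does not state its exact transformation), and the column order REALTWN, STOCK, LOAN, GDP is an assumed Cholesky ordering for the decomposition.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
n = 52                                   # 2001Q1-2013Q4: 52 quarters
df = pd.DataFrame(rng.standard_normal((n, 4)).cumsum(axis=0),
                  columns=["REALTWN", "STOCK", "LOAN", "GDP"])

# 4-period-lag VAR on the (differenced) toy data
res = VAR(df.diff().dropna()).fit(4)
fevd = res.fevd(10)                      # forecast error variance decompositions
fevd.summary()                           # prints, e.g., the share of REALTWN's variance due to STOCK
```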
How FSIs Predict Expansion/Contraction Phases
The correlation coefficients of EXPSTOCK with NPLEQU and LIQ are significantly positive and negative, respectively, meaning that when stock market prices are in an expansion phase, banks' asset quality and liquidity conditions become weaker. Third, the correlation coefficient between EXPLOAN and NETINT is significantly negative, meaning that when total bank loans are in an expansion phase, banks' net interest margins deteriorate. In summary, the empirical evidence generally shows that stock market performance is the most exogenous variable and may drive changes in other variables such as real estate prices. Increases in real estate prices are concordant with increases in loan amounts and other real activities. Banking supervisors should therefore also pay close attention to large movements in short-term loans and in the growth rates of loans to business and loans to individuals. More importantly, the FSIs predict the expansion versus contraction phases of real estate prices well. As a rule of thumb, while banking soundness worsened during contraction phases of real estate prices, bank health in fact deteriorated even during expansion phases. Thus, the government should always watch the FSIs to guard against banking fragility caused by over-lending to the real estate sector when the stock market is booming and real estate prices are climbing.

Conclusion
The business cycle is the repeated expansion and contraction of real economic activity; the credit cycle is the cyclical process of bank credit; and the financial cycle consists of the ups and downs in asset prices and financial institutions' balance sheets. This paper examines the linkages among these cycles and tests whether FSIs act as good predictors of the expansion/contraction phases of cyclical variables. Based on Taiwan's aggregate data over the recent decade, the paper first identifies the cyclical behavior of the financial cycle, the credit cycle and the business cycle and defines their expansion and contraction phases. Second, we calculate the concordance index between cycles. Third, we employ the IMF's FSIs as predictors of the expansion and contraction phases of the cyclical variables via Probit regression estimation.

The main finding is that short-term credit, the change in loans to business and the change in loans to individuals have higher concordance with other cyclical variables such as the real estate price index and real economic activity. The evidence implies that, in addition to stock market booms, banking supervisors should also pay attention to large swings in short-term loans and in the growth rates of loans to business and loans to individuals. Besides, several FSIs, such as banks' profitability ratios, asset quality ratios, capital adequacy ratios and liquidity ratios, act as predictors of the expansion/contraction phases of real estate prices. On average, banking soundness worsened during both the contraction and the expansion phases of real estate prices. Thus, the ups and downs of real estate prices should be monitored by banking supervisors to prevent banking fragility.

This paper helps us obtain basic knowledge about how to identify the cyclical behavior of financial versus real economic activity indicators, whether cyclical variables co-move or are counter-cyclical, and how the asset-liability conditions of financial institutions change over the expansion/contraction phases of financial cyclical variables. The authorities should monitor the soundness of the financial system over the different phases of the financial and credit cycles and engage in prompt corrective action when key soundness indicators, asset prices and bank credit amounts fluctuate strongly.

Figure 1. Forecast error variance decomposition analysis

Table 2.
This paper identifies cyclical behavior of 18 variables.The first four are real estate price, proxied by Shin-Yi Real Estate Price Index of Taipei City (REALTPE), Shin-Yi Real Estate Price Index of Taiwan (REALTWN), consumer price index (CPI) of housing price (REALCPI) and CPI of housing rent (RENTCPI).Stock market price is proxied by weighted average index of the Taiwan Stock Exchange (STOCK) and quarterly change for weighted average index of the Taiwan Stock Exchange (DSTOCK).Bank credit is measured by total bank credit (LOAN), change in total bank credit (DLOAN), short-term loan (STLOAN), long-term loan (LTLOAN), total loan to public/private enterprise (BUSLOAN), change in total loan to public/private enterprise (DBUSLOAN), loan to individual (INDLOAN) and change in loan to individual (DINDLOAN).Real economic activity (business cycle) is measured by gross domestic product (GDP), change in gross domestic product (DGDP), retailed sales index (RETAIL) and industrial production index (IPI).Based onClaessens, Kose, and Terrones (2011), this paper identifies peaks and troughs of each 18 cyclical variables as well as expansion phases and contraction phases for all variables.Two dummy variables are constructed to indicate a given cyclical variable is lying on expansion or contraction phase.For example, a dummy variable representing expansion phase of REALTPE, namely, EXPREALTPE, is equal to 1 if given point of time REALTPE is on its expansion phase and 0 otherwise.CONREALTPE is equal to 1 if given point of time REALTPE is on its contraction phase and 0 otherwise.It is worth noting that EXPREALTPE plus CONREALTPE is not bounded to 1, because at some point of time, REALTPE is neither on expansion nor contraction phase.Similarly, 34 dummies representing binary description of expansion/contraction phase of remaining 17 cyclical variables are constructed, which are summarized in Table2.Constructing these dummies facilitates analyzing concordance between/among cycles as well as examining how FSIs are correlated with expansion/contraction of cyclical variables such as asset price and bank credit.Mnemonics and definition of variables Dummy variable for expansion phase of total loan to public/private enterprise EXPDBUSLOAN dummy variable for expansion phase of change in total loan to public/private enterprise EXPINDLOAN dummy variable for expansion phase of loan to individual EXPDINDLOAN dummy variable for expansion phase of change in loan to individual EXPGDP dummy variable for expansion phase of gross domestic product EXPDSTOCKdummy variable for expansion phase of quarterly change for weighted average index of the Taiwan Stock Exchange EXPLOAN dummy variable for expansion phase of total bank credit EXPDLOAN dummy variable for expansion phase of change in total bank credit EXPSTLOAN dummy variable for expansion phase of Total short-term loan EXPLTLOAN dummy variable for expansion phase of Total long-term loan EXPBUSLOAN CONSTLOAN dummy variable for contraction phase of Total short-term loan CONLTLOAN dummy variable for contraction phase of Total long-term loan CONBUSLOAN dummy variable for contraction phase of total loan to public/private enterprise CONDBUSLOAN dummy variable for contraction phase of change in total loan to public/private enterprise CONINDLOAN dummy variable for contraction phase of loan to individual CONDINDLOAN dummy variable for contraction phase of change in loan to individual CONGDP dummy variable for contraction phase of gross domestic product CONDGDP dummy variable for 
contraction phase of change in gross domestic product CONRETAIL dummy variable for contraction phase of retailed sales index CONTIPI dummy variable for contraction phase of industrial production index Note.Definitions of variables are from the Taiwan Economic Journal (TEJ) and http://www.sinyi.com.tw/news/article.php/3422. Table 5 reports Granger causality test result among main cyclical variables (REALTWN, STOCK, LOAN, GDP, RETAIL and IPI) and 12 FSIs.In Table5, YES means that one variable is Granger caused by other variable at 5% level of statistical significance, and NO means otherwise (does not reach statistical significance).Starting from the first column, we observe that real estate price (REALTWN) is caused by stock price (STOCK) and retail sales index (RETAIL).Total loan amount (LOAN) is caused by real estate price (REALTWN) and three real activity indicators (GDP, RETAIL and IPI).GDP is caused by real estate price (REALTWN) and total loan amount (LOAN).Other two real activity indicators are caused by asset prices-real estate price (REALTWN) and stock price index (STOCK).Above evidence shows that stock price drives real estate price, and real estate price drives supply of loan and real economic activities.However, there is a relative little evidence shows that FSIs lead financial cyclical variables, exceptions are total loan is Granger caused by net interest income, GDP is Granger caused by ROA of overall FIs and industrial production is Granger caused by non-interest income. Table 3 . Descriptive statistics of variables Note.This table reports basic descriptive statistics (mean, standard deviation, minimum and maximum) of variables.See Table2for the definition of variables.Quarterly data is ranged from 2001Q1 to 2013Q4. Table 5 . Table 6 reports pair-wise Pearson correlation coefficients among expansion phases of main cyclical variables (EXPREALTWN, EXPSTOCK, EXPLOAN, EXPGDP, EXPDGDP, EXPRETAIL and EXPIPI) and 12 FSIs.First, correlation coefficients on EXPREALTWN and CAR, ROA, ROE, NETINT and LIQ are significantly negative, means that when real estate price is on expansion phase, bank's capital adequacy ratio, profitability ratio and liquidity ratio are deteriorated.Correlation coefficients on EXPREALTWN and NPLEQU and NPL are significantly positive, means that when real estate price is on expansion phase, bank's asset quality is deteriorated.Second, correlation coefficients on EXPSTOCK and NPLEQU and LIQ are significantly positive Granger causality test among cyclical variables and FSIs This table reports Granger causality tests among main cyclical variables and 12 FSIs.YES means that row variable is Granger caused by column variable at 5% level of statistical significance, and NO means otherwise.See Table 2 for the definition of variables.Quarterly data is ranged from 2001Q1 to 2013Q4. Table 6 . 
Pair-wise Pearson correlation coefficients among FSIs and expansion phases of cyclical variables. Note. This table reports pair-wise Pearson correlation coefficients among the expansion phases of the cyclical variables and the financial soundness indicators. See Table 1 for the definition of variables. Quarterly data is employed. A correlation coefficient followed by an asterisk is significantly different from zero at least at the 10% level.

Table 7 reports pair-wise Pearson correlation coefficients among the contraction phases of the main cyclical variables and the FSIs. The correlation coefficients of CONREALTWN with OBSLOAN and NONINT are significantly positive (0.5572 and 0.4169), meaning that when real estate prices are in a contraction phase, banks' asset quality deteriorates while noninterest income increases. The correlation coefficients of CONREALTWN with ROA and NETINT are significantly negative, meaning that when real estate prices are in a contraction phase, banks' profitability and net interest margins deteriorate. Table 8 reports simple Probit estimates of the relationship between the expansion phases of the cyclical variables and the 12 FSIs. When the predicted variable is EXPREALTWN, the estimated coefficients on NPLEQU, NPL and NONINT are positive and significant (0.257, 3.3933 and 0.0353), meaning that banks' asset quality tends to deteriorate, and banks' noninterest income tends to increase, when real estate prices are in an expansion phase. When the predicted variable is EXPREALTWN, the coefficients on CAR, COVERAGE, ROA, ROE and NETINT are significantly negative, meaning that banks' capital adequacy, asset quality, profitability and net interest margins deteriorate when real estate prices are in an expansion phase. Table 9 reports simple Probit estimates of the relationship between the contraction phases of the cyclical variables and the 12 FSIs. When the explained variable is CONREALTPE, the coefficients on OBSLOAN and NETINT are significantly positive and negative (0.2646 and −0.0904), respectively, indicating that banks' asset quality and net interest margins deteriorate when real estate prices in Taipei are in a contraction phase.

Table 7. Pair-wise Pearson correlation coefficients among FSIs and contraction phases of cyclical variables. Note. This table reports pair-wise Pearson correlation coefficients among the contraction phases of the cyclical variables and the financial soundness indicators. See Table 1 for the definition of variables. Quarterly data is employed. A correlation coefficient followed by an asterisk is significantly different from zero at least at the 10% level.
The Period adding and incrementing bifurcations: from rotation theory to applications This survey article is concerned with the study of bifurcations of piecewise-smooth maps. We review the literature in circle maps and quasi-contractions and provide paths through this literature to prove sufficient conditions for the occurrence of two types of bifurcation scenarios involving rich dynamics. The first scenario consists of the appearance of periodic orbits whose symbolic sequences and"rotation"numbers follow a Farey tree structure; the periods of the periodic orbits are given by consecutive addition. This is called the {\em period adding} bifurcation, and its proof relies on results for maps on the circle. In the second scenario, symbolic sequences are obtained by consecutive attachment of a given symbolic block and the periods of periodic orbits are incremented by a constant term. It is called the {\em period incrementing} bifurcation, in its proof relies on results for maps on the interval. We also discuss the expanding cases, as some of the partial results found in the literature also hold when these maps lose contractiveness. The higher dimensional case is also discussed by means of {\em quasi-contractions}. We also provide applied examples in control theory, power electronics and neuroscience where these results can be applied to obtain precise descriptions of their dynamics. Introduction Piecewise-smooth (piecewise-defined or non-smooth) systems are non-regular or discontinuous systems induced by dynamics associated with sharp changes in position, velocity, or other magnitudes undergoing a jump in their value. While exhibiting richer dynamics than they smooth versions, this type of systems provide more natural and simpler models in many applications from different disciplines, such as switching systems in power electronics, sliding-mode techniques in control theory, hybrid systems with resets in neuroscience or impact systems in mechanics. As a consequence, in the last decade, the interest in such type of systems has considerably grown (see [ML12] for a recent general survey). In particular, piecewise-smooth maps have focused the attention of many researchers, who have studied them from very different perspectives. One of the most reported dynamical aspects are the different bifurcations scenarios that they may exhibit, which turn out to be extraordinarily rich. Seduced by their graphical beauty and mainly supported by computations, many authors have recurrently observed and reported these bifurcation phenomena. However, although they assemble many well known results on circle maps, they have been considered as new and exclusive of piecewise-smooth maps. In this review article we show that, after some extensions and modifications, many of the well known results for circle maps developed in the 80's and early 90's can be used to obtain rigorous proofs of general results that can be systematically applied to piecewise-smooth maps. More precisely, we consider a piecewise-smooth map undergoing a discontinuity at x = 0, and consider as parameters the two lateral images at this point. To our knowledge, such a map was first studied by Leonov [Leo59], and later on obtained as an approximation of a Poincaré map of smooth flow near a homoclinic bifurcation of the figure eight and butterfly types [Spa82,Hom96a] (see §2.2 for more references). 
Depending on the signs and magnitude of the slopes of the map at both sides of the discontinuity, the bifurcation scenario in this two-dimensional parameter space may be very different. These signs are determined by the number of twists exhibited by the invariant manifolds involved in the homoclinic bifurcation. When the map is contracting in both sides of the discontinuity and both slopes have different sign, the so-called period incrementing scenario occurs. This bifurcation was reported in [Hom96a], and the details of the proof for this case were given in [AGS11]. However, if both slopes are positive, then the so-called period adding scenario occurs. Although this has been widely reported in the literature (see §2.2 for references) a complete proof has not been reported until now. We first note, in the increasing-increasing case, the piecewise-smooth map can be reduced to a discontinuous circle map. We then assemble and extend many well known results for circle maps to the discontinuous case in order to provide a straight path to prove this bifurcation scenario. This work is organized as follows. In Section 2 we provide basic definitions and a detailed statement of the results. In Section 3 we review and extend results for circle maps and quasi-contractions to provide a proof of the period adding bifurcation scenario (increasing-increasing or orientation preserving case). In Section 4 we revisit the proof provided in [AGS11] for the period incrementing bifurcation scenario (increasing-decreasing or non-orientable case). System definition and properties Let us consider a piecewise-smooth map (2.1) with x ∈ R and f L , f R smooth functions satisfying h.1 f L (0) = f R (0) = 0. h.2 0 < (f L (x)) < 1, x ∈ (−∞, 0) h.3 0 < |(f R (x)) | < 1, x ∈ (0, ∞) For convenience, we do not define at this point the map f at x = 0. Roughly speaking, the only difference between choosing the image of f at x = 0 the one given by one side, the other one, none (or even both) will imply the persistence or not of invariant objects (fixed points or periodic orbits) at their bifurcations. We will focus on this question whenever it becomes relevant. Due to condition h.1, the map (2.1) is discontinuous at x = 0 if µ L = µ R . As we will show, this discontinuity introduces exclusive dynamical phenomena which are not possible in smooth (C 1 ) one-dimensional systems. As discussed in the introduction, one observes similar phenomena (the bifurcation scenarios described below) in smooth flows of dimension three near homoclinic bifurcations. They are also observed in smooth maps, when restricted to the circle instead of R. This discontinuity represents a boundary in the state space abruptly separating two different dynamics: the ones given by the maps f L and f R . These dynamics will strongly depend on the sign of f R (x) on (0, ∞) leading to completely different families of periodic orbits. Note that the cases when f L (x) and f R (x) have different slopes in their respective domains are conjugate through the symmetry x ←→ −x. Moreover, as it will be shown below, when both f L (x) and f R (x) are decreasing functions in their respective domains, the possible dynamics will be easy. Therefore we can restrict to the case that f L (x) is an increasing function in (−∞, 0), as stated in h.2. One of the differences between the families of periodic orbits that one can find depending on the sign of f R (x) in (0, ∞) will be given by the sequence of steps that periodic orbits perform at each side of the boundary x = 0. 
Therefore, we will introduce the symbolic dynamics given by the following symbolic encoding. Given a point x ∈ R, we associate to its trajectory by f , (x, f (x), f 2 (x), . . . ), a symbolic sequence given by As a(x) provides a symbol of length one (L or R), one can omit the comas separating the symbols in Eq. (2.2) without introducing imprecisions. We call this the itinerary of x by f or the symbolic sequence associated with the trajectory of x by f . Let us now consider the shift operator acting on symbolic sequences Clearly, the shift operator satisfies Of special interest for us will be the symbolic sequences associated with periodic orbits. In this case, the symbolic sequences will be also periodic and we will represent them by the repetition of the generating symbolic block. For example, let (x 1 , x 2 ) be a periodic orbit, and assume x 1 < 0 and x 2 > 0. Then, the symbolic sequences associated with x 1 and x 2 are where ∞ indicates infinite repetition. Due to property (2.5), the shift operator acts on the generating blocks as a cyclic permutation of offset 1, as it moves the first symbol to the last position. More precisely, if (x 1 , . . . , x n ), x i ∈ R, is a periodic orbit of f and (x 1 . . . x n ) ∞ , x i ∈ {L, R}, is the symbolic sequence associated with the periodic trajectory of Hence, a periodic orbit of length n can be represented by n different symbolic sequences obtained by cyclic permutations one from each other. Definition 2.1. Symbolic sequences can be ordered by lexicographical order induced by L < R. That is, if and only if there exists some j ≥ 1 such that Definition 2.2. Given a periodic symbolic sequence x = (x 1 . . . x q ) ∞ , we will say that it is minimal if and similarly for a maximal symbolic sequence. Definition 2.3. We will say that a periodic orbit of length n is a x 1 . . . x nperiodic orbit, x i ∈ {L, R}, if there exists some point of this periodic orbit, Usually, in order to represent the symbolic sequence of a periodic orbit we will choose its minimal representative. For example, assume (x 1 , . . . , x 5 ) is a 5-periodic orbit such that I f (x 1 ) = (LRLLR) ∞ . Then, we will say that (x 1 , . . . , x 5 ) is a L 2 RLR-periodic orbit or a periodic orbit of type L 2 RLR, where the superindex 2 means that there two symbols L. Given a periodic orbit, besides its period, one important characteristic associated with its symbolic sequence is the number of symbols R and L and how are they distributed along the sequence. The latter will be explained in detail below. For the former we define the η-number: q be a symbolic sequence, and let p be the number of symbols R contained in γ. We then define the η-number as η = p q . (2.6) As it will detailed below ( §3.1, see Remark 3.4), under certain conditions, the piecewise-smooth map (2.1) becomes a circle map with rotation number the η-number. Hence, the η-number as defined above is frequently referred to as rotation number in the context of piecewise-smooth maps, even when these conditions are satisfied. We now focus on the question of, for a map f of type (2.1) satisfying h.1-h3, what are the possible periodic orbits, their symbolic sequences and their bifurcations in the parameter space given by the offsets µ L and µ R , µ L × µ R . To this end, we first note that, if µ L , µ R < 0, as the maps f L and f R are contracting, then f possesses two attracting coexisting L and R-fixed points x L < 0 and x R > 0: The domains of attraction are separated by the boundary x = 0. 
Indeed, if both f L and f R are increasing maps, then these domains become (−∞, 0) and (0, ∞), respectively. Note that, although x = 0 is not an invariant point (an equilibrium), it acts as a separatrix between these domains of attraction. If one of these two parameters vanishes and becomes positive, the corresponding fixed point collides with the boundary x = 0 and undergoes a border collision bifurcation. Depending on how the map f (2.1) is defined at x = 0, at the moment of the bifurcation the fixed point may still exist or not. Just after this bifurcation, the fixed point and no longer exists, while the other one remains and becomes the unique global attractor. Hence, the origin of this parameter space consists of a codimension-two bifurcation point. But then the question arises: what does exist when both parameters are positive and both fixed points disappear in border collision bifurcations? The answer to this question (summarized in §2.2 and §2.3) depends on the signs of the slopes of the maps f L and f R for x < 0 and x > 0, respectively. Recalling that the increasing-decreasing and decreasing-cases are conjugate we will only distinguish between two cases: increasing-increasing and increasing-decreasing. These are also typically referred as orientation preserving and non-orientable cases. As will be argued in 2.4, the decreasing-decreasing case is straightforward under the assumption of contractiveness. 2.2 Overview of the orientation preserving case: the period adding The bifurcation scenario when both f L and f R are both increasing is shown in Figs. 2.1 and 2.2. As shown in Fig. 2.1, there exist an infinite number of bifurcation curves emerging from the origin the parameter space µ L × µ R . These separate regions of existence of periodic orbits whose periods are given by "successive addition" of the ones of "neighbouring regions" 1 . We will make this As explained in §1, to our knowledge, this bifurcation scenario was first described by Leonov in the late 1950s ([Leo59, Leo60a, Leo60b, Leo62]), when studying by means of direct computations a piecewise-linear map similar to (2.1). Later on, this was studied in more detail in different contexts ([Spa82, CGT84, GPTT86, TS86, Mir87, PTT87, GGT88, GT88, LPZ89, GH94, Hom96b]), rediscovered in [AS06] and named period adding. This is precisely defined in Definition 3.5. Overview of the non-orientable case: the period incrementing For the increasing-decreasing case one finds the bifurcation scenario shown in As mentioned in §1, this bifurcation scenario was first described by Leonov [Leo59,Leo60a,Leo60b,Leo62]. Later on, it was studied due to its relevance in homoclinic bifurcations involving non-orientable homoclinic manifolds [Hom96b,GH94]. It was rediscovered in [AS06] when studying a linear piecewise-smooth map and named period incrementing. Full details of this proof were given in [AGS11]. Summarizing theorem The results presented in the previous sections are summarized in the following Theorem 2.5. Let f be a map as in Eq. (2.1) satisfying conditions h.1-h3. Let γ be a C 1 curve in the parameter space satisfying H.1 µ L (λ) > 0 and µ R (λ) > 0 for λ ∈ (0, 1) Then, the bifurcation diagram exhibited by the map f λ obtained from Eq. (2.1) after performing the reparametrization given by γ, follows a For a description of the bifurcation scenarios announced in i) and ii), the period adding and period incrementing, see Sections 2.2 and 2.3 for an overview, and Sections 3 and 4 for more details and proofs. 
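As a numerical illustration of case i) of Theorem 2.5, the following sketch (assumed code, not from the paper) iterates a contracting piecewise-linear instance of the map (2.1), with f_L(x) = a_L x and f_R(x) = a_R x and positive offsets μ_L, μ_R, records the L/R itinerary of Eq. (2.3) after a transient, and computes the η-number of Eq. (2.6) as the fraction of R symbols. The slopes a_L = a_R = 0.6 and the offsets are arbitrary choices satisfying h.1–h.3.

```python
def make_map(mu_L, mu_R, a_L=0.6, a_R=0.6):
    """Contracting piecewise-linear instance of (2.1):
    f(x) = a_L*x + mu_L for x < 0 and f(x) = a_R*x - mu_R for x >= 0
    (the value assigned at x = 0 itself is immaterial for generic orbits)."""
    return lambda x: a_L * x + mu_L if x < 0 else a_R * x - mu_R

def itinerary(f, x0=0.1, n_transient=500, n=40):
    """L/R itinerary of Eq. (2.3) along the attractor, after discarding a transient."""
    x = x0
    for _ in range(n_transient):
        x = f(x)
    word = []
    for _ in range(n):
        word.append("L" if x < 0 else "R")
        x = f(x)
    return "".join(word)

def eta(word):
    """eta-number (2.6): fraction of R symbols in the symbolic block."""
    return word.count("R") / len(word)

word = itinerary(make_map(mu_L=0.3, mu_R=0.7))
print(word, eta(word))   # a repeating block whose eta-number depends on (mu_L, mu_R)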
We now explain briefly the case when condition h.2 is not satisfied (f L is decreasing): , only a LR-periodic orbit can exist for all λ ∈ (0, 1)., iv) if f R (x) > 0, for x ∈ (−∞, 0) is equivalent to ii) and interchanging L and R in the symbolic dynamics. Clearly, iii) comes from the fact that, under these conditions, f L ((−∞, 0)) ⊂ (µ L , ∞) and f R ((0, ∞)) ⊂ (−∞, −µ R ), and hence, due to the contractiveness of these maps, f must possess a LR-periodic orbit. The fact that the cases iv) and ii) are conjugate comes from applying the symmetries given by the change of variables x ←→ −x. 3 Orientation preserving case Detailed description We first provide a detailed description of the period adding bifurcation structure by stating some results which will be proved in the rest of this section. The bifurcation structure given by the so-called period adding is strongly linked with the ordering of the rational numbers given by the Farey tree. In order to explain how this tree is generated, we first define the Farey neighbours. Recall that a rational number p/q is irreducible if (p, q) = 1, where (·, ·) denotes the greatest common divisor. Definition 3.1. Given two irreducible rational numbers p/q and p /q , we say that they are Farey neighbours if |pq − p q| = 1. Remark 3.2. The definition of the Farey neighbours does not provide uniqueness. That is, given a rational number it does not have a unique Farey neighbour /1/2 and 1/3 are Farey neighbours, but also 1/2 and 2/5 are). Uniqueness is obtained when one also fixes the order, n, of the Farey tree shown in Fig. 3.5(a); that is, if one considers rational numbers with denominator smaller or equal to n. Definition 3.3. Given and irreducible rational number P/Q we define its Farey parents as the two unique irreducible rational numbers p/q and p /q such that and p/q and p /q are Farey neighbours. Equivalently, two Farey neighbours p/q and p /q produce the child P/Q given in Eq. (3.1). The rational number P/Q is also called the mediant of p/q and p /q . As shown in Fig. 3.5(a), starting with the Farey neighbours 0/1 and 1/1, the Farey tree is generated by obtaining rational numbers by adding their numerators and denominators. That is, given two Farey neighbours p/q and p /q , they generate the child (p + p )/(q + q ), which is an irreducible rational number. Note that this provides all rational numbers. This is because, for fixed n, the Farey tree contains in all levels up to n all combinations of rational number with denominator smaller or equal to n (also called Farey sequence of order n, see [HW60]). For example, the Farey sequence of order 6 is Hence, the Farey parents of an irreducible rational numbers are unique. Under conditions i) in Theorem 2.5, the map f (2.1) possesses periodic orbits for a set of values of λ with full measure. When this parameter is varied from 0 to 1, the periods of these periodic are given by the denominators of the rational numbers numbers given in the Farey tree (see Fig. 3.5(a)). As noted by some authors [FG11], other type of trees, as the Stern-Brocot, can also generate this sequence of periods. However, the most interesting relation between the Farey tree and the sequence of periodic orbits given by the period adding shown in Fig. 2.2 regards their associated symbolic sequences. To explain this, we recall that the η-number (Def. 2.4) is given by the ratio between the number of R s contained in a symbolic sequence and its length. 
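Definitions 3.1–3.3 and the tree of symbolic sequences can be checked mechanically. The sketch below (hypothetical helper functions, anticipating the concatenation rule Δ = αγ described in the next paragraphs) tests the Farey-neighbour condition |pq′ − p′q| = 1, recovers the Farey parents of an irreducible P/Q by brute force, and builds the Farey tree of words of Fig. 3.5(b) by attaching to each mediant the concatenation of its parents' words; for η = 2/5 it returns LLRLR, i.e. L²RLR, as in the text.

```python
from fractions import Fraction
from math import gcd

def are_farey_neighbours(p, q, pp, qq):
    """Definition 3.1: |p*q' - p'*q| = 1."""
    return abs(p * qq - pp * q) == 1

def farey_parents(P, Q):
    """Farey parents (Definition 3.3) of an irreducible P/Q with Q >= 2, by brute force."""
    assert gcd(P, Q) == 1 and Q >= 2
    for q in range(1, Q):
        p = round(P * q / Q)
        if abs(P * q - p * Q) == 1 and gcd(p, q) == 1:
            return (p, q), (P - p, Q - q)          # the two parents sum to (P, Q)

def farey_word_tree(levels):
    """Farey tree of symbolic sequences: the word attached to a mediant is the
    concatenation of its parents' words, starting from 0/1 -> 'L' and 1/1 -> 'R'."""
    tree = {Fraction(0, 1): "L", Fraction(1, 1): "R"}
    frontier = [(Fraction(0, 1), Fraction(1, 1))]
    for _ in range(levels):
        nxt = []
        for left, right in frontier:
            child = Fraction(left.numerator + right.numerator,
                             left.denominator + right.denominator)
            tree[child] = tree[left] + tree[right]  # Delta = alpha gamma
            nxt += [(left, child), (child, right)]
        frontier = nxt
    return tree

print(farey_parents(2, 5))                  # ((1, 2), (1, 3)): the parents of 2/5, up to ordering
print(farey_word_tree(3)[Fraction(2, 5)])   # 'LLRLR', i.e. L^2 R L R
```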
When λ in Theorem 2.5 is varied from 0 to 1, η is continuous and monotonically increasing taking all values between 0 and 1. However, the set of values of λ for which η is irrational forms a Cantor set of zero measure. Such a function is called a devil's staircase, and it is constant (taking rational values) almost everywhere except for a Cantor set of zero measure. This function is shown in Fig. 2.2(b), and is the well known one formed by the rotation numbers of the periodic orbits of the so-called Arnold circle map when Ω is varied from 0 to 1. Note that η = 0 and η = 1 correspond to the fixed points L and R which undergo border collision bifurcations for λ = 0 and λ = 1, respectively. Although each periodic orbit for λ ∈ (0, 1) is in one-to-one correspondence with a rational number in the Farey tree through the η-number, their symbolic sequences are, in principle, not uniquely identified. For example, assume that for a certain value λ = λ 2/5 there exists a periodic orbit with η = 2/5. However, its symbolic sequence could be given by any of the generating minimal blocks L 2 RLR or L 3 R 2 , as they have length 5 and contain two R symbols. However, the symbolic sequence that corresponds to this periodic orbit is L 2 RLR. To see this, we identify each periodic orbit with a symbolic sequence on the Farey tree of symbolic sequences shown in Fig. 3.5(b), which is constructed as follows. Starting with the sequences L and R, one concatenates those sequences whose η-numbers are Farey numbers. By construction, this provides a unique correspondence between rational numbers and symbolic sequences through their η-number and the Farey tree. That is, to each rational number P/Q in the Farey tree one associates a symbolic sequence ∆ given by the concatenation where α and γ are the (minimal) symbolic sequences of the Farey neighbours rational numbers p/q and p /q , respectively: As we will be discussed in §3.5, this concatenation provides the so-called maximin sequences (see definition 3.36). We are now ready to provide a formal description of the period adding bifurcation structure. iii) if f λ∆ possesses a ∆-periodic orbit with η = P/Q, (P, Q) = 1, then there exist values λ α < λ ∆ < λ γ such that f λα and f λγ possess α and γ-periodic orbits, respectively, whose η-numbers are Farey neighbours and their mediant is P/Q. Moreover, ∆ is the concatenation of α and γ: ∆ = αγ Summary of the proof In order to facilitate the lecture of the proof of the result described in detail in §3.1, we provide a schematic summary of the necessary steps. 1) By performing the change of variables given in Eq. (3.2) we first show that a piecewise-defined map f as in (2.1) satisfying i) in Theorem 2.5 is an orientation preserving circle map (see Definition 3.6). Although this circle map will be discontinuous, in Section 3.3 we will show how classical results for continuous circle maps also hold. In particular, we will get (a) existence of the rotation number (Prop. 3.10) (b) Existence of a unique and stable periodic orbit when the rotation number is rational (Prop. 3.13 + contractivness). 2) In §3.4 we provide symbolic properties of the itineraries of periodic orbits of orientation preserving circle maps. More precisely: (a) In Prop. 3.31 we show that the symbolic itinerary of a twist periodic orbit is a p, q-ordered symbolic sequence. This identifies each periodic orbit of an orientation preserving map with a unique symbolic itinerary through its rotation number. 
(b) In Proposition 3.31 we show that the itineraries of p, q-ordered periodic orbits are given by concatenation of the itineraries of the periodic orbits with rotation numbers the Farey parents of p/q. Hence, they are in the Farey tree of symbolic sequences shown in Fig. 3.5(b). 3) In §3.4 we also study one-parameter families of orientation preserving (discontinuous) circle maps. This is equivalent to varying the parameter λ under the conditions of Theorem 2.5 i). Using the continuity of the rotation number (Proposition 3.20) and a result of Boyd (Theorem 3.34), we show that the rotation number (and hence the η-number) follows a devil's staircase leading to the adding scenario when λ is varied from 0 to 1. We emphasize that the previous steps provide (to our knowledge) the shortest path to prove i) of Theorem 2.5. However, as it will be explained along the rest of this Section, this is not the only one. In Section 3.5 we provide two partial alternatives to cover some of the steps mentioned above. These involve the concept of maximin sequences and quasi-contractions. Reduction to an orientation preserving circle map and some properties In this section we first show that, under condition i) of Theorem 2.5, the piecewise-smooth map (2.1) can be reduced to class of orientation preserving (increasing) discontinuous circle maps. Then, we present results on circle maps, which, are well known for continuous circle maps. However, by making little modifications of the classical proofs we adapt them to the discontinuous case. We will make this clear for each particular case. Let us observe that, under condition i) of Theorem 2.5, the piecewise-smooth map (2.1) is increasing on both sides of the discontinuity x = 0. Hence, all the dynamics are attracted into the interval Fig. 3.6). By identifying these two values, the map becomes a circle map which is continuous at x = 0, but not necessarily at x = −µ R ∼ µ L (see Fig. 3.6). As we are interested on varying the parameters µ R and µ L , we perform the change of variables which is homeomorphic, strictly increasing and maps −µ R to 0 and µ L to 1. Of special interest will be the value which separates the behaviour given by φ Hence, we have reduced the piecewise-smooth map to a class of circle maps, which we make precise in the following definition. Definition 3.6. We say that Due to the existence of c fulfilling C.3, such a circle map is of degree one, as the image of S 1 by f twists at most once around S 1 . When also considering condition C.4 we ensure that such an orientation preserving map is invertible. However, condition C.4 allows this class of maps to be not necessary continuous at x = 0. Hence, at x = 0 and x = 1 one can choose between the image from the left or right of x = 0 (or indeed any other value). When convenient, we will choose both values at x = 0 and x = 1 and deal with a bi-valued function. Notice that, at this point, we are not requiring contractiveness, and all results in this section hold also for expansive maps as long as conditions C.1-C4 are satisfied. Definition 3.7. Let f be a map satisfying conditions C.1-C4. We will say that F is a lift of f of degree N ≥ 0 if to the map If N = 1, we will refer to F just as the lift of f . From now on, we will restrict to lifts of degree one. Remark 3.8. Due to condition C.4, the lift of an orientation preserving map is an increasing map, probably discontinuous at the integer numbers, where it undergoes a positive gap. Remark 3.9. 
In definition 3.7 the value of the lift F at integer numbers, n, is not uniquely defined. When convenient, we will take the one given by f (1 − ) + n, f (0 + ) + n or both. The latter will lead to a bi-valued lift. The following result is well known and provides the definition of the rotation number of an orientation preserving circle map satisfying C.1-C4. This was introduced by Poincaré [Poi81] for homeomorphisms of the circle of degree 1 and later studied and extended to rotation intervals by many authors (see [ALM00] and references therein). For discontinuous orientation preserving circle maps, this was proven in [RT86,Gam87]. However, if one considers a bi-valued lift at integer numbers (see Remark 3.9), then the standard proof holds. exists and is independent of x. Proof. We give the standard proof a slight modification to overcome the discontinuities at integer numbers. As F is increasing, we get that, for 0 ≤ x ≤ 1 Noting that we take F be-valued at integer values, we can write this as where F (0) and F (1) can be each of the lateral values. Using that F (x + 1) = F (x) + 1 and applying it recursively to F n (1) we have that which we can write as Hence, taking limits we get and the limit does not depend on x. We next show that, indeed, this limit exists. We apply Proposition 1 of [RT86], which states that, if a sequence a n satisfies |a m+n − a m − a n | ≤ A, (3.5) for all n, m ≥ 1 and some constant A, then there exists some ρ such that The sequence F n (0) satisfies (3.5) with A = 1. To see this, we use that F (x + n) = F (x) + n to obtain, for any x ≥ 0, where [·] denotes the integer part. Then, taking x = F m (0), we get Note that this proof differs from the one given in [ALM00]. There, it is first proved that, if f possesses a periodic orbit, then this limit exists and is rational; then it is shown that it does not depend on x as above. In this approach we show the existence of this limit and then we discuss the dynamics of the map depending on its value. The previous result permits one to define the rotation number of a circle map f fulfilling C.1-C4. Definition 3.11 (Rotation number). Given a map f satisfying C.1-C4 and F its lift, we define the rotation number of f as for any x ∈ R. Remark 3.12. The fact that the limit given in Eq. (3.4) exists implies that F n (x) grows linearly with n: Recalling the properties shown in (3.3), the rotation number ρ(f ) can be seen as the average number of times that the lift F crosses an integer number per iteration. The next result is also standard result for continuous circle maps (Prop. 3.7.11 of [ALM00]), it provides the existence of a periodic obit if the rotation number is rational. Below we prove that it also holds for discontinuous circle maps. Proposition 3.13. Let F be a lift of an orientation preserving circle map satisfying C.1-C4. Then, ρ(F ) = p/q, (p, q) = 1, if and only if there exists Proposition 3.13, which we prove below, is stated assuming that the map F is bi-valued at integer values (see Remark 3.9). This ensures that one always finds a periodic orbit if the rotation number is rational. However, if one considers only one image at integer numbers, given by f (0 + ) or f (1 − ), one can lose the existence of a periodic orbit and get a ω-limit consisting of q points mimicking a periodic orbit. That is, one recovers a result given in [RT86], which we repeat below for completeness. Proposition 3.14 ([RT86] Th. 2). Let F be the lift of an orientation preserving circle map satisfying C.1-C4 and p, q ∈ Z with q > 0. 
If ρ(F ) = p/q then exactly one of the following holds. i) There exists x 0 ∈ R such that F q (x 0 ) = x 0 + p. ii) For all x ∈ R, F q (x) > x + p, and there exists x 0 ∈ R such that iii) For all x ∈ R, F q (x) < x + p, and there exists x 0 ∈ R such that Conversely, if either i), ii) or iii) holds then ρ(F ) = p/q. For completeness, we will provide a proof of Proposition 3.13 based on [ALM00] but adapting it to the discontinuous case. It relies on the following two lemmas. The first lemma is equivalent to Lem. 2 of [RT86]. We announced it as in Lem. 3.7.10 of [ALM00] but we adapt its proof to hold also for discontinuous circle maps. Lemma 3.16. Let F be the lift of a circle map satisfying C.1-C4, and let p ∈ Z. Then, Proof. One can proceed as in [RT86]. However, by considering that F is bivalued at integer numbers, we can proceed as in the proof for the continuous case [ALM00], which we repeat for completeness. We show i), ii) is analogous. Due to the strict inequality, there exists some Then, for all k we get Hence, as by Prop. 3.10 the rotation number exists and is unique, we have that which proves i) with ε = δ/q. Note that the proof given in [ALM00] is slightly different, as it does not use the uniqueness of the rotation number. The following lemma is trivial for the continuous case. Lemma 3.17 ([RT86] Lem. 3). Let F : R −→ R be a (not necessary continuous) non-decreasing map fulfilling F (x + 1) = x + 1 for all x ∈ R. Assume that F (x 1 ) > x 1 and F (x 2 ) < x 2 for some x 1 , x 2 ∈ R. Then there exists some x 0 such that F (x 0 ) = x 0 , and F is continuous on the left at x 0 . We provide more intuitive proof of this lemma. Proof. Assume F (x) = x for all x and recall that, if F undergoes a discontinuity, the jump must be positive. If F (0) > 0, F lies above the diagonal, otherwise it crosses it or undergoes a negative gap, which contradicts the existence of x 2 . If F (0) < 0, we argue that F lies below the diagonal, which contradicts the existence of x 1 . If it happens that F skips the diagonal by a positive jump, as F (1) = F (0) + 1 < 1, it must cross it afterwards or undergo a negative jump, which is not possible. Remark 3.18. The class of maps considered in Lemma 3.17 is not necessary a lift of a circle map satisfying C.1-C4, it may undergo discontinuities with positive jumps between 0 and 1. We now prove Proposition 3.13. by adapting the proof given in [ALM00] to the discontinuous case. Proof. (Of Proposition 3.13) Assume that F q − p has a fixed point. Then, we get that F nq (x 0 ) − np = x 0 for all n > 0, which implies that ρ(F ) = p/q. Assume that ρ(F ) = p/q and that F q (x) − p does not have a fixed point. Then, by Lemma 3.17, we get that either F q (x) − p < x or F q (x) − p > x. But then, by Lemma 3.16, there exists ε > 0 such that either ρ(F ) > p/q + ε or ρ(F ) < p/q − ε, which is a contradiction. Note that, besides the fact that we deal with a discontinuous lift, the previous proof differs from the one given in [ALM00] by the fact that we can use the existence and uniqueness of the rotation number provided by Prop. 3.10. Remark 3.19. Proposition 3.13 does not provide the uniqueness nor stability of periodic orbits. However, if, in addition to C.1-C4, one adds contractiveness, f (x) < 1 for x ∈ [0, 1], then one gets that the such periodic orbit is unique and attracting. The next result provides the continuity of the rotation number; i.e., if two lifts of orientation preserving maps are "close" (using the uniform norm), so are their rotation numbers. 
Note that, assuming that these maps are bi-valued at integer values, one can always choose the proper images to properly compare them. Hence, its proof for the discontinuous case becomes the standard one but taking into account this fact and Lemmas 3.16 and 3.17. Proof. We proceed as in [ALM00] by adapting the proof to the fact that F is bi-valued in order to to overcome the discontinuities at integer numbers. Assume ρ(F ) = p/q. Then, the function G(x) = F q (x) − p − x is away from zero. By Lemma 3.17 (applied to G(x) + x), we get that either G(x) < 0 or G(x) > 0 for all x ∈ R. Then, by Lemma 3.16, we have that either ρ(F ) < p/q or ρ(F ) > p/q, respectively. We now note that G(x) has degree 0; that is, . Hence, we can ensure that, ifG is in small enough neighbourhood of G, then eitherG < 0 orG > 0, respectively. This implies that, ifF is in a small enough neighbourhood of F , then either ρ(F ) < p/q or ρ(F ) > p/q, respectively. Hence, we have shown that, if there exist p 1 /q 1 and p 2 /q 2 such that andF is sufficiently close to F , then and hence F −→ ρ(F ) is continuous. The next Lemma shows that the rotation number is increasing as a function of F . Lemma 3.21. Let f and g be two orientation preserving maps satisfying C.1-C4. If f ≥ g then ρ(f ) ≥ ρ(g). Proof. Let x < c and F and G be two lifts of f and g such that F (x + 1) = F (x) + 1 and G(x + 1) = G(x) + 1. As F and G are increasing functions undergoing a positive gap at x = k, F n (x) > G n (x), even if, for some n, F n (x) or G n (x) reach any of the discontinuities at x = k. Hence, As the rotation number does not depend on x, we get ρ(f ) ≥ ρ(g). We next study properties of the periodic orbits of orientation preserving circle maps satisfying C.1-C4. These will be crucial to show symbolic properties of periodic orbits in §3.4. To this end, we provide the following definitions. Definition 3.22 (Well ordered sequence of points, twist periodic orbit). Let F : R −→ R be an continuous monotonically increasing map, and let be a sequence of q points. Let Π be the projection map and consider the points given by Π −1 (Π(x i )), which consist of adding integers to the original sequence (3.6). We write the points Π −1 (Π(x i )) as the sequence given by We call this a lifted sequence. Moreover, we say that the sequence (x i ) i∈N is p, q-ordered by F if it satisfies Notice that, if F is the lift an orientation preserving circle map f and a sequence is p, q-ordered by F , then, by definition, the sequence x 0 , . . . , x q−1 must be periodic by f . In this case, such periodic orbit satisfying (3.8)-(3.9) is called a twist cycle or twist periodic orbit of f , and the corresponding lifted sequence is a lifted cycle. The following result tells us that this is reciprocal; that is, any periodic orbit must be a twist cycle. be the points of a periodic orbit of an orientation preserving map f with rotation number p/q. Let F be the (degree one) lift of f . Then, the sequence of points given by Π −1 (Π(x i )) is p, q-ordered by F . The proof of this result is as in the continuous case ([ALM00] Lem. 3.7.4) but taking into account that the lift F is bi-valued at integer numbers. Proof. By properly choosing the image of F at integer values, the lift F restricted to the lifted cycle of the periodic orbit is an order preserving bijection. That is, by iterating the points x 0 < · · · < x q−1 one visits all the points of the lifted cycle exactly once, and the order is preserved (F (x j ) < F (x i ) iff x j < x i ). 
Hence, F (x i ) = x i+r for all i and some integer r. If not, then one gets that F (x j ) = x j+r1 and F (x i ) = x i+r2 . If r 1 = r 2 , then the number of points of the lifted sequence in [x j+r1 , x i+r2 ] and in [x j , x i ] does not coincide, and hence the order cannot be preserved. As the rotation number is ρ(F ) = p/q, we have that necessarily x i + p = F q (x i ) = x i+qr . As x i belongs to q-periodic cycle, we get x i+qr = x i + r and hence r = p and the cycle is p, q-ordered. The next result is also very well known for continuous circle maps. As in previous results, we provide a standard proof adapted for the discontinuous case. Proposition 3.24. Let f be an orientation preserving circle map satisfying C.1-C4 and assume that ρ(f ) ∈ R\Q. Then, if condition C.4 is a strict inequality, the ω-limit set is a Cantor set. f −1 , which is a function with a "hole", as it is not defined in U . After filling this hole with the value 1, we obtain the continuous function (3.11) (see Fig. 3.7) which has the same rotation number as f . Note that Hence, if for some n, c ∈ f n (U ), then g n (c) ∈ U , g n+1 (c) = 1, g n+2 (c) = c, and hence g has a periodic orbit of period n + 2, which is not compatible with having irrational rotation number. Next we show that lim is a Cantor set. As long as c / ∈ f n (U ), at each iteration, f n (S 1 ) = S 1 \f n−1 (U ) consists of subtracting a nonempty interval to the interior of f n−1 (U ) (see Fig. 3.8). By Corollary 3.3 of [Vee89], the total removed amount, ∪ i≥0 f i (U ), is dense. This comes from the fact that, although the map g is not differentiable, Denjoy theorem holds and, if the rotation number is irrational, g has no "homtervals"; in particular, U is not a homterval and the sequence g n (U ) is pairwise disjoint. Therefore, by construction, S 1 \ ∪ i≥0 f i (U ) is a Cantor set. Moreover, also by construction, the images of x − and x + are dense in this set. Thus, every point in S 1 \∪ i≥0 f i (U ) has a dense orbit and hence this set becomes the ω-limit of f . Remark 3.25. If condition C.4 is satisfied by an equality (the map becomes continuous) and the rotation number is irrational, then the ω-limit of f may also be a Cantor set or the whole circle. If f is C 2 , then Denjoy theorem holds and the latter occurs. However, if it is C 1 or C 0 , f may become a Denjoy counterexample (see [Nit71]), and it's ω-limit may be a Cantor set. To conclude this section, we recover a piecewise-smooth map as defined in Eq. (2.1). As mentioned above, after applying the change of variable given in Eq. (3.2), the mapf ( After applying the reparameterization γ given in Eq. (2.7), the value becomes an strictly decreasing function of λ such that lim λ→0 + c λ = 1 lim λ→1 − c λ = 0. Moreover, for λ = 0 and λ = 1, the mapf possesses fixed points at x = 1 and x = 0, respectively. Given a piecewise-smooth map f satisfying h.1-h3, we will define its rotation number as the rotation number of the mapf = φ • f • φ −1 obtained after a reduction to a circle map: For simplicity, we will also refer to the lift of f as the lift off . Remark 3.26. Let (x 0 , . . . , x q−1 ) be a periodic orbit of a piecewise-smooth map f satisfying conditions C.1-C4, and let x = (x 0 . . . x q−1 ) ∈ {L, R} q be its associated symbolic sequence regarding the symbolic encoding given in (2.3): Then, recalling Remark 3.12 and the fact that the image of x by the lift of f , F (x), crosses an integer number m when x < c+m < F (x), the rotation number of f , ρ(f ), becomes the η number defined in (2.6). 
That is, it becomes the ration of number of symbols R contained in x to the length of the sequence x, q. Symbolic dynamics and families of orientation preserving maps In this section we will show some dynamical properties of maps satisfying conditions C.1-C4, focusing specially on periodic orbits, their symbolic itineraries and their relation with the rotation number. In some of the results, we will additionally require the map to be contractive, however, we emphasize that, when not specified, the results that we present here do not require contractiveness. The main result in this section is the following Theorem, which is, recalling that periodic orbits of orientation preserving circle maps satisfying C.1-C4 are well ordered, a straightforward consequence of Proposition 3.33. Theorem 3.27. The symbolic sequence of the itinerary of a periodic orbit of a circle map satisfying C.1-C4 with rotation number P/Q is the one in the Farey tree of symbolic sequences associated with rotation number P/Q. At the end of this section (Lemma 3.35) we show that, for a piecewise-smooth map (2.1) satisfying i) of Theorem 2.5, the η-number defined in Def. 2.4 follows a devil's staircase. This is a consequence of Theorem 3.34 proved in [Boy85]. To show this, we start with the following Definition 3.28. We call W p,q the set of periodic symbolic sequences generated by a symbolic block of length q containing p symbols R: Of special interest will be the well ordered symbolic sequences contained in these sets: Definition 3.29. Let x ∈ W p,q be a periodic symbolic sequence. Consider the (lexicographically) ordered sequence given by iterates of x by σ (3.12) We say that the sequence x is a p, q-ordered (symbolic) sequence if In other words, σ acts on the sequence (3.12) as a cyclic permutation: there exists some k ∈ N, 0 < k < q, such that The next example illustrates the previous definition. The following result identifies the symbolic sequences of twist periodic orbits. 1-2). Under the conditions of Proposition 3.23, if the sequence of points (Π −1 (Π(x i ))) 0≤i≤q−1 is p, q-ordered by F (see Def. 3.22) then the itinerary I f (x i ) ∈ W p,q is a p, q-ordered symbolic sequence (see Def. 3.29). Proof. Let k be such that We first note that, for 0 ≤ i, j ≤ q − 1 we have We then write kp = q + r, r ≥ 0. Then, the results comes from the fact that which occurs iff 0 < r ≤ p. Assume that r > p. Then q + r − p > q and hence (k − 1)p > q, which contradicts (3.13). Letting x = I f (x 0 ), this implies where N is the smallest such that N k = 0 (mod q). The next result recovers what was stated in Remark 3.26: the η-number (Def. 2.4) for a piecewise-smooth map satisfying i) of Theorem 2.5 becomes the rotation number (Def. 3.11) of the orientation preserving circle map obtained after the change (3.2). Corollary 3.32. Let f be an orientation preserving map, and let x belong to a q-periodic orbit with symbolic sequence I f (x) = x ∞ ∈ W p,q . Then, the rotation number becomes ρ(f ) = p q . That is, it is given by the ratio between the number of R symbols contained in x and the period, q, of the sequence (η-number). Our next step consists of showing that the symbolic sequence associated with a periodic orbit of an orientation preserving circle map belongs to the Farey tree of symbolic sequences shown in Fig. 3.5(b). More precisely, we show that such a symbolic sequence is obtained by the concatenation of the symbolic sequences associated with the periodic orbits of the Farey parents of its rotation number. 
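As a concrete illustration of this concatenation rule, the following short Python sketch builds the first few levels of the Farey tree of symbolic blocks by concatenating the blocks of the two Farey parents and checks that the block attached to a rotation number p/q contains exactly p symbols R among q symbols; the function name and the depth are arbitrary choices made for this example and do not belong to the original argument.

```python
from fractions import Fraction

def farey_blocks(depth):
    """Build the Farey tree of symbolic blocks by concatenating parents.

    The root level is 0/1 -> 'L' and 1/1 -> 'R'; every mediant
    (p+p')/(q+q') receives the concatenated block alpha + omega of its
    two Farey parents, mirroring Delta = alpha omega in the text.
    """
    blocks = {Fraction(0, 1): "L", Fraction(1, 1): "R"}
    level = [(Fraction(0, 1), Fraction(1, 1))]
    for _ in range(depth):
        next_level = []
        for left, right in level:
            med = Fraction(left.numerator + right.numerator,
                           left.denominator + right.denominator)
            blocks[med] = blocks[left] + blocks[right]
            next_level += [(left, med), (med, right)]
        level = next_level
    return blocks

for rho, block in sorted(farey_blocks(4).items()):
    # sanity check: (#R symbols) / (block length) equals the rotation number
    assert Fraction(block.count("R"), len(block)) == rho
    print(rho, block)
```

For instance, the block produced for 2/5 is LLRLR (that is, L^2 RLR), which agrees with the minimal block appearing for η = 2/5 in Example 3.38 below.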
As a consequence of that, ones obtains Theorem 3.27, announced above. In Section 3.5 we will present an alternative approach using the maximin properties of these sequences. Proposition 3.33. Let f be an orientation preserving circle map (satisfying C.1-C4), and assume it has a periodic orbit with rotation number P/Q, (P, Q) = 1. Let ∆ be its symbolic sequence, and assume that ∆ is minimal, Let p, q, p and q natural numbers such that and let α and ω be the minimal symbolic sequences corresponding to the periodic orbits with rotation numbers p/q and p /q , respectively. Then ∆ is the concatenation of α and ω, ∆ = αω. Proof. Let 0 < z 0 < z 1 < · · · < z Q−1 < 1 be the periodic orbit with rotation number P/Q. Let us consider the sequence of points given by Π −1 (Π(z i )), where Π is the projection map given in (3.7). Let F be the lift of f . Due to Proposition 3.23, the points z i are P, Q-ordered by F : Let us now split the first Q points of the sequence z iP , 0 ≤ i < Q, in two subsequences defined as follows As before, we extend these sequences by means of Π −1 . On one hand, by construction, the sequence x n fulfills On the other hand, using that p q − pq = 1, we get that Hence, by translating the subindexes of y i → y i+pq we get a sequence of points 0 < y 0 < y 1 < · · · <y q −1 < 1 < y q < y q +1 < · · · < y 2q −1 < 2 < y 2q < . . . By Proposition 3.23, these sequences follow the symbolic dynamics given by the periodic orbits with rotation numbers p/q and p /q , as they are p, q and p , q -ordered: By construction, the symbolic sequence ∆ is the concatenation of α and ω, as we wanted to show. In the rest of this section we study the rotation number for families of orientation preserving circle maps; that is, under the variation of the parameter c in condition C.3, which, by means of the change of variables given in Eq. (3.2), is equivalent to the parameter λ of parametrizing the curve in parameter space mentioned in Theorem 2.5. Recalling Proposition 3.20 and Lemma 3.21, we already have that, when varying λ from 0 to 1, the rotation number (and hence the η-number) is continuous and monotonically increases from 0 to 1. In order to show that, moreover, it is a devil's staircase, we need to show that, in addition, it is constant for all values of λ except for a Cantor set of zero measure. This is will come from the following Then m(E) = 0, where m denotes Lebesgue measure, and furthermore E has zero Hausdorff dimension. Generalizations of the previous theorem can be found in [Vee89,Swi89]. Finally, we show that Theorem 3.34 extends to a piecewise-smooth system of the form (2.1) satisfying h.1-h3 and i) of Theorem 2.5 (or an orientation preserving circle map satisfying C.1-C4). This will prove that the η-number follows a devil's staircase. Proof. We note that the mapf λ is invertible. Let φ as in (3.2) and define Then, the inversef −1 λ (y) is an increasing expanding map with a "hole" for y ∈ [a, b] (see fig. 3.9(b)). As the trajectories of any point x ∈ [0, 1] byf do not reach the interval [a, b], we can proceed as the proof of Proposition 3.24 and complete the mapf −1 λ with a horizontal part equal to 1 for y ∈ [a, b]. This allows us to consider a map be a parameterization satisfying H.1-H3 of Theorem 2.5. Then, the interval [a, b] smoothly varies from [0, φ(f L (−µ R (0)))] to [φ(f R (µ L (1))), 1] when λ is varied from λ = 0 to λ = 1. Let g λ (y) by g(y) after applying the reparameterization γ. Then, we have Maybe we need the length of [a, b] to be kept constant. 
g λ (y) = g 0 (y) + λ, and hence we can apply Theorem 3.34 and get the result. The maximin approach In this section we first present an alternative approach to identify sequences in the Farey tree of symbolic sequences with the itineraries of orientation preserving circle maps. This approach is based on results presented in the 80's on maximin symbolic sequences (see Def. 3.41). In some sense, this approach provides stronger results, as they give more information about the symbolic sequences associated with periodic orbits of orientation preserving circle maps. In particular, given the rotation number, p/q, the maximin property permits to obtain the proper symbolic itinerary without constructing the whole Farey tree of symbolic sequences. Instead, one needs to find in W p,q which is the symbolic sequence that verifies a certain condition (maximin/minimax). The path of this alternative proof is as follows. By Propositions 3.13 and 3.23, we know that, if its rotation number is rational, p/q, then any periodic orbit has a p, q-ordered symbolic itinerary. Then, by Theorem 3.40, such an itinerary is maximin. Finally, Proposition 3.39 tells us that a symbolic sequence is maximin if and only if it belongs to the Farey tree of symbolic sequences. The most difficult step of this alternative approach relies on proving Theorem 3.40. Once we get that the symbolic itineraries of orientation preserving circle maps belong to the Farey tree of symbolic sequences, one proceeds as in §3.4 to study the bifurcation structure when the parameters c or λ are varied in order to prove i) in Theorem 2.5. In addition, in this section we also recover a result on quasi-contractions (see Def. 3.41) from the late 80's. This shows show that, in the contractive case, any periodic orbit must have a maximin itinerary, which permits to avoid going trough the concept of well ordered sequences. This is given by Theorem 3.42, which states that, under the assumption of contraction, any periodic orbit of an orientation preserving map has a maximin symbolic itinerary. However, not only the proof of this result is significantly more difficult than the ones shown in Sections 3.3 and 3.4, but also we emphasize that it only holds for the contractive case, whereas most of the results presented in those sections do not. We start by defining the maximin/minimax properties. Definition 3.36. We say that a symbolic sequence x ∈ W p,q is As it was proven in [Ber82,GIT84] using different techniques, the maximin and minimax properties are equivalent. That is, one has the following Theorem 3.37 ( [Ber82,GIT84]). Let x ∈ W p,q . Then, x is maximin if and only if it is minimax. Example 3.38. Let η = 2/5. Up to cyclic permutations, there exist only two periodic sequences in W 2,5 , which are represented by means of the minimal and maximal blocks Then, as the sequence L 2 RLR is minimax and maximin. The following results tell us that all the sequences shown in the Farey tree of symbolic sequences (Fig. 3.5(b)) (given by consecutive concatenation), are maximin (minimax). Proposition 3.39 ([Gam87] Proposition II.2.4-3). Let p/q < p /q be the irreducible form of two Farey neighbours, and let x ∞ ∈ W p,q and y ∞ ∈ W p ,q two maximin sequences, with x ∈ {L, R} q and y ∈ {L, R} q minimal blocks. Then, the sequences given by the concatenation of these two blocks (xy) ∞ ∈ W (p+p )/(q+q ) is maximin. Proof. 
One first sees that x and y belong to the same domain of some deflation; i.e, a map which collapses blocks contained in sequences as follows: Then, one proceeds by induction using that π(x) and π(y) are maximin, and also is π −1 (xy). See [Gam87] for more details. Finally, together with Proposition 3.31, the following theorem gives us and alternative way to the one shown in §3.4 to show that the symbolic sequences of periodic orbits of orientation preserving circle maps are given by the Farey tree of symbolic sequences. For completeness, we finally present an alternative way to obtain that, in the contractive case, any periodic orbit of an orientation preservinc circle map must have a maximin itinerary. This result was proven in [Gam87, GGT88,GT88] and presented in a quite abstract and general form. However, it can be applied, under the hypothesis of contraction, to orientation preserving circle maps (or equivalently to piecewise-smooth maps as in (2.1)) to show that the symbolic itineraries of periodic orbits of such maps are maximin and, hence, they are found in the Farey tree of symbolic sequences. We first difine a quasi contraction in an abstract way. Definition 3.41. Let (E 0 , d 0 ) and (E 1 , d 1 ) be two metric spaces and F 0 ∈ E 0 and F 1 ∈ E 1 two points. Then, a map is a quasi contraction if there exists 0 ≤ k ≤ 1 such that for any Note that the sets E 0 and E 1 can be of arbitrary dimension. In addition, notice that an orientation preserving circle map satisfying C.1-C4 under the assumption of contraction, is a quasi-contraction with E 0 = [0, c], E 1 = [c, 1], and F 0 = F 1 = c. Equivalently, E 0 = (−∞, 0], E 1 = [0, ∞) and F 0 = F 1 = 0 for the piecewise-smooth map (2.1). We now note that the definition of an itinerary of a trajectory defined in Eq. (2.2) extends naturally to the general context of a quasi-contraction. Given x ∈ E 0 ∪ E 1 and f as in Eq. (3.14), its itinerary by f becomes Note that also the η-number defined in Eq. (2.6) also extends to the type of maps as in Eq. (3.14). We then have the following result: Theorem 3.42. [GT88] Let f be a quasi-contraction. Then, i) f admits 0, 1 or 2 periodic orbits ii) any periodic orbit of f has an itinerary which is maximin iii) if f has two periodic orbits, then their itineraries belong to W p,q and W p ,q and p/q and p /q are Farey neighbours. The previous theorem was proved in [GGT88] (Theorem A) for the case that quasi-contraction constant k satisfies 0 ≤ k ≤ 1/2. The version given in [GT88] not only extends to the case of the quasi-contractions with 0 ≤ k ≤ 1, but it also provides more information; we give here a mutilated version which adapts to our purposes. Note that, by the uniqueness of the rotation number for the orientation preserving case, when applied to quasi-contracting maps, one only gets 0 or 1 periodic orbit. The situation in Theorem 3.42 for 2 periodic orbits is only possible in the non-orientably case, which will be treated in §4. The case for which one gets 0 periodic orbits correponds to an irrational rotation number. Non-orientable case The non-orientable (increasing-decreasing) case occurs under conditions ii) of Theorem 2.5. In this case, the map (2.1) becomes increasing for x < 0 and decreasing for x > 0. As mentioned in §1 and §2.3, this situation was discussed in [Hom96b] §3.3 and proven in full detail in [AGS11]. Independently some of these results were proven afterwards by other authors [JMB + 13], who seem not aware of these previous works. 
For completeness, we provide in this section an overview of this proof. We first observe (see Fig. 4.10) that a map f as in Eq. (2.1) satisfying conditions h.1-h3 and ii) of Theorem 2.5 is a map on the interval (4.1) We now wonder about the symbolic itineraries of periodic orbits for a map (4.1) under the mentioned conditions. Clearly, when the parameters µ L and µ R are varied along the parametrization (2.7) satisfying H.1-H3, the map (4.1) becomes negative for x ∈ [0, µ L ]. Hence, periodic orbits cannot have symbolic itineraries with consecutive R symbols. In order to show that only possible itineraries for periodic orbits are of the form L n R, we define the sequences of preimages of x = 0 by f L and f R : a 0 = 0, a n = f −1 L (a n−1 ) if n > 0, (4.2) b n = f −1 R (a n ) if n ≥ n 0 . (4.3) where m indicates the length of the interval. Then we have the following Lemma 4.1. If f is a map of type (2.1) fulfilling conditions h.1-h3 and ii) of Theorem 2.5. Then there exists at most one a j (equiv. b j ) such that a j ∈ f r ((0, µ L ]) (equiv. b j ∈ (0, µ L ]). Using the property shown in Eq. and, since f R is a contracting one obtains m(f r ((0, µ L ]) < m([a n+1 , a n ]) ∀n. Therefore, at most one a n can be located in f ((0, µ L ]). The next Lemma tells us what symbolic itineraries are possible. Lemma 4.2 ([AGS11] Lem. 7). Let f be a piecewise-smooth map as considered above, and let x 0 < · · · < x n , x i ∈ [f R (µ L ), µ L ] be a periodic orbit. Then, I f (x 0 ) = (L n R) ∞ . The proof of this Lemma is in fact an extension of the arguments presented in [Hom96a] §3.3, and was provided in full detail in [AGS11]. We provide it here for completeness. Proof. If f has periodic orbit of period n ≥ 2, we necessary have f R ((0, µ L ]) ∩ [a n , a n−1 ] = ∅, which can be given due to one of the next three situations (see Figs. 4.11 and 4.12) S.1 a n−1 ∈ f R ((0, µ L ]) S.2 f R ((0, µ L ]) ⊂ (a n , a n−1 ) S.3 a n ∈ f R ((0, µ L ]) If S.1 holds, b n−1 ∈ (0, µ L ] and are continuous contracting functions which must have a unique (stable) fixed point. Therefore, two stable periodic orbits of type L n R and L n−1 R coexist. Note that for n = 2 this proves also the existence of a LR-periodic orbit. In the second case (S.2), b n−1 / ∈ (0, µ L ] ([0, µ L ] ⊂ (b n−1 , b n )) and f n L f R : (0, µ L ] −→ (0, µ L ] is a continuous contracting function which also must have a unique (stable) fixed point. In this case, there exists a unique periodic orbit of type L n R which is the unique attractor in (0, µ L ]. Finally, if S.3 holds, replacing n by n − 1 and arguing as in S.1, one has that a stable periodic orbit of type L n R coexists with a stable L n+1 R-periodic one. decreasing γ, S.3 holds, a L n+1 R-periodic orbit bifurcates and coexists with the L n R-periodic orbit. As this occurs for all n, this argument proves the bifurcation scenario described in §2.3
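To make the parameter dependence of the rotation/η-number more tangible, the following small numerical sketch iterates a toy contracted rotation x ↦ ax + μ (mod 1) with an assumed slope a = 0.5, records an R symbol exactly when the lift crosses an integer, and estimates the η-number as the fraction of R symbols; sweeping μ reproduces, for this toy map, the staircase-like dependence on the parameter discussed above. The specific affine branches and parameter values are illustrative assumptions only and are not the maps studied in this paper.

```python
import numpy as np

def eta_number(mu, a=0.5, n_transient=500, n_count=4000):
    """Fraction of iterates of the toy map x -> a*x + mu (mod 1) on [0, 1)
    that wrap past 1. An R symbol is recorded exactly when the lift crosses
    an integer, so this fraction plays the role of the eta/rotation number."""
    x = 0.0
    wraps = 0
    for i in range(n_transient + n_count):
        y = a * x + mu
        wrapped = y >= 1.0          # symbol R: the lift crossed an integer
        x = y - 1.0 if wrapped else y
        if i >= n_transient:
            wraps += wrapped
    return wraps / n_count

# Sweeping mu traces out a staircase-like graph of eta versus the parameter,
# locked onto rationals p/q on parameter intervals of positive length.
for mu in np.linspace(0.55, 0.95, 9):
    print(f"mu = {mu:.2f}   eta ~ {eta_number(mu):.3f}")
```

Plateaus in the printed values correspond to parameter intervals locked onto rational rotation numbers p/q, where the itinerary is a periodic block from the Farey tree of symbolic sequences.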
2015-12-10T09:34:10.000Z
2014-07-07T00:00:00.000
{ "year": 2017, "sha1": "c5de3ad2fbb2b0a745b9ff6ca1d2654e0a0877ce", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1407.1895", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "2b67d2fbc8db50cdebdf26e254b2214dd9aa1369", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
18886638
pes2o/s2orc
v3-fos-license
Praziquantel degradation in marine aquarium water Praziquantel (PZQ) is a drug commonly utilized to treat both human schistosomiasis and some parasitic infections and infestations in animals. In the aquarium industry, PZQ can be administered in a “bath” to treat the presence of ectoparasites on both the gills and skin of fish and elasmobranchs. In order to fully treat an infestation, the bath treatment has to maintain therapeutic levels of PZQ over a period of days or weeks. It has long been assumed that, once administered, PZQ is stable in a marine environment throughout the treatment interval and must be mechanically removed, but no controlled experiments have been conducted to validate that claim. This study aimed to determine if PZQ would break down naturally within a marine aquarium below its 2 ppm therapeutic level during a typical 30-day treatment: and if so, does the presence of fish or the elimination of all living biological material impact the degradation of PZQ? Three 650 L marine aquarium systems, each containing 12 fish (French grunts: Haemulon flavolineatum), and three 650 L marine aquariums each containing no fish were treated with PZQ (2 ppm) and concentrations were measured daily for 30 days. After one round of treatment, the PZQ was no longer detectable in any system after 8 (±1) days. The subsequent two PZQ treatments yielded even faster PZQ breakdown (non-detectable after 2 days and 2 ± 1 day, respectively) with slight variations between systems. Linear mixed effects models of the data indicate that day and trial most impact PZQ degradation, while the presence of fish was not a factor in the best-fit models. In a completely sterilized marine system (0.5 L) PZQ concentration remained unchanged over 15 days, suggesting that PZQ may be stable in a marine system during this time period. The degradation observed in non-sterile marine systems in this study may be microbial in nature. This work should be taken into consideration when providing PZQ bath treatments to marine animals to ensure maximum drug administration. It has long been assumed within the aquarium industry that PZQ is stable in a marine aquarium throughout this treatment period, and PZQ would therefore have to be removed by a chemical filtration unit (i.e., a carbon filter), chemical breakdown through ozone/UV disinfection or a complete water change after the bath treatment had finished. However, the same assumption existed for formalin, another drug used in marine aquarium systems, but a recent study indicated that these assumptions for formalin were largely false. Formalin is often prescribed as a 5-day bath treatment, but investigation showed that concentrations decreased below detectable limits in as few as 4 h (Knight, Boles & Stamper, in press). While some observations of a similar breakdown of PZQ exist (Crowder & Charanda, 2004;Marrero & Ellis, 2014), to our knowledge, no controlled study has been conducted. This study aimed to answer three questions: (1) How long does PZQ last in a closed, marine aquarium system? (2) Does the presence of fish affect PZQ degradation? (3) Does the elimination of living biological material impact PZQ degradation? METHODS This study utilized six, recirculating 650 L systems. The water in each system was mechanically filtered through a 50 µm pre-filter and a 20 µm pleated cartridge filter. 
It was biologically filtered through a trickle filter/sump containing 29.5 L of 1-inch bio barrels and bio-balls (Pentair Aquatic Eco-Systems, Apopka, FL, USA) and then passed through two 250 L aquarium tanks. The biological filter in all systems was started with the same bottle of Fritz-Zyme 9, Live Nitrifying Bacteria (Mesquite, TX, USA). All systems were filled with 30-32 ppt artificial salt water, which was made using Instant Ocean R (Spectrum Brands, Blacksburg, VA, USA) and carbon-filtered potable water. French grunts (Haemulon flavolineatum) were placed in three, randomly selected systems, such that there were six fish in each tank and twelve fish in each system. All fish were captive-bred at the University of Florida's Tropical Aquaculture Lab and were approximately 22 g when introduced to the study. All fish were fed at approximately 6% of their body weight in aquatic gel (5AB0 MTLS Aquatic Gel; Mazuri, St. Louis, MO, USA) daily. The remaining three systems were maintained as ''No Fish Systems'' systems and were ''fed'' 1.0 ppm of (NH 4 ) 2 SO4 (VWR, ACS Grade, Radnor PA, USA) daily, based on a calculated approximation of ammonia production in the fish systems. Each system was dosed with PZQ (P4668; Sigma Aldrich, St. Louis, MO, USA) at 2 ppm concentration, by mechanical dissolution of chemical inside a 350 µm net, added to the system sump near the pump intake. At a flow rate of 34 L minute −1 , the water in these systems turned over completely every 20 min. After 90 min, the whole system had been turned over 4.5 times, allowing the chemical to be evenly distributed through each system. A 500 mL water sample was collected with polypropylene bottles from each system ninety minutes after the initial PZQ addition and is referred to as the ''Day 0'' sample throughout this study. Every subsequent day after initial PZQ addition, another 500 mL water sample was collected from each system for a total of 30 days. Immediately after collection, these samples were stored in a cryofreezer (−70 • C) until further analysis could be completed. During the study period, no UV or carbon filters were used, and no water changes were conducted. Daily water chemistries (temperature, salinity, pH, dissolved oxygen) and weekly chemistries (total ammonia as nitrogen, nitrite levels and alkalinity) were taken to ensure the safety of the animals involved in the study. Temperature was maintained at 28 ± 0.7 • C. Salinity was 32 ± 2 ppt for all systems across all treatments. Dissolved oxygen was maintained at 6.9 ± 0.5 mg L −1 O 2 and pH remained at 8.01 ± 0.25. After the 30-day study period, the system water was run through the carbon filter for four days to remove any potential PZQ still present. The carbon filters were then removed, and the systems were maintained normally for 14 days as a washout period. A 20% water change was then performed to prepare for the next trial of the experiment. Trials 2 and 3 were set up exactly the same way as previously mentioned (Table 1). Trial 4 of this experiment was used as the control, and thus required that no fish or microorganisms be present in the systems. All fish were removed from the systems, and following 3 weeks of carbon filtration, all systems were bleached (75 ppm Cl − ). After 24 h, the appropriate amount of sodium thiosulfate (100 g/system) was added to each system to neutralize the bleach. After another full 24 h, PZQ was added to the system (2 ppm) and the sample collection process continued as it had for the first three trials. 
Trial 5 of this experiment was used as a second control in which all six of the systems were bleached at a higher concentration (200 ppm) to ensure complete eradication of any microorganisms. After 24 h, the appropriate amount of sodium thiosulfate (220 g) was added to neutralize the bleach and, another 24 h later, PZQ (2 ppm) was added to each system and the sample collection process continued as previously described. To test the degradation of PZQ in a completely sterilized system, this process was repeated in a 500 mL sample bottle. Any lab-ware that would come into contact with this "system" was sterilized for at least 20 min at 120 °C. Seawater from the same supply that was used in the first five trials of this experiment, but that had never been exposed to PZQ, was also sterilized for 20 min at 120 °C. Exactly 5 mL of 200 ppm PZQ stock solution was added to 495 mL of sterilized seawater in each of six 500-mL sample bottles, which were then covered. Three of the six samples were put into the freezer (−70 °C) 90 min after PZQ addition, while the remaining three samples remained in a chemical hood at room temperature for 15 days and were then placed in the same freezer. Ethics statement This project was approved by Disney's Animal Care and Welfare Committee (IR #1102). PZQ concentrations never exceeded prolonged-immersion therapeutic levels (2 ppm) for the fish in the study, and both water quality and fish health (i.e., food consumption, lack of wounds, normal movement and socialization, etc.) were monitored on a daily basis. Extractions To extract the PZQ from the water samples, samples were systematically removed from the deep freezer and allowed to thaw at room temperature overnight. The following morning, samples were extracted using a C-8 disk (Sigma Aldrich) and vacuum filtration as described by the C-8 disk instructions. Methanol (ACS spectrophotometric grade; Sigma Aldrich) and nanopure water were used to condition the C-8 disk before the sample was poured through. Once the sample was filtered, the collection vial was added to the vacuum apparatus, and 10 mL of acetonitrile (40%; Sigma Aldrich) was vacuumed through the filter three times into the collection vial for a total of 30 mL of acetonitrile solution (Crowder & Charanda, 2004). This eluent was retained and stored in a refrigerator (4.4 °C) until it was shipped with ice packs to the Georgia Aquarium for high-performance liquid chromatography (HPLC) analysis. Due to monetary constraints, it was impossible to analyze every sample collected (a total of 180 samples per treatment). Instead, a sample from every 3rd day from each system was analyzed initially. Once the time scale over which concentrations decreased below detectable limits was known, a few additional samples were analyzed for finer resolution. HPLC analysis Aliquots of extracted samples were placed in 2 mL extraction vials and analyzed on a Dionex Ultimate 3000 UHPLC. An eluent solution of 40% acetonitrile was pumped at a 1.6 mL/min flow rate onto a Whatman Partisil 5 ODS-3 4.6 × 250 mm analytical column warmed to 30 °C. The samples were injected by the autosampler warmed to 30 °C and analyzed by the UV detector at 210 nm. Chromatograms were produced and analyzed with the Chromeleon 6.8 software. A five-point standard curve ranging from 1 mg/L to 50 mg/L, an extracted known, and a known were used as the quality control for each batch.
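For illustration only, the kind of quantification implied by the five-point standard curve can be sketched as a simple linear fit of peak area versus concentration; the peak-area values below are invented, and the actual quantification in this study was performed with the Chromeleon software rather than with this script.

```python
import numpy as np

# Hypothetical calibration data: standard concentrations (mg/L) and the
# corresponding chromatogram peak areas (arbitrary units, invented here).
std_conc = np.array([1.0, 5.0, 10.0, 25.0, 50.0])
std_area = np.array([12.1, 60.4, 119.8, 301.5, 602.3])

# Ordinary least-squares calibration line: area = slope * conc + intercept
slope, intercept = np.polyfit(std_conc, std_area, deg=1)

def area_to_conc(peak_area):
    """Invert the calibration line to report a concentration in mg/L."""
    return (peak_area - intercept) / slope

r = np.corrcoef(std_conc, std_area)[0, 1]
print(f"calibration R^2 = {r**2:.4f}")
print(f"unknown with peak area 24.0 -> {area_to_conc(24.0):.2f} mg/L")
```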
Statistics To account for any errors associated with repeatedly sampling from the same systems and the unbalanced data sets between systems that resulted from our search for finer resolution, linear mixed effects models were used to determine the relationships between the parameters in this study. In all models, 'system' was used as a random factor, PZQ concentration was the response variable and treatment (presence of fish or chlorine bleach), day and trial were the explanatory variables. All explanatory variables were modeled independently and with all interactive effects. Model estimates were obtained using the lme4 package (Bates et al., 2014) in R Core Team (2014) using restricted likelihood estimates (REML). The 'anova' function in lme4 was used to compare models using the AIC (Akaike's Information Criterion), such that the model with the smallest AIC was the best fit model for the data. The 'anova' function in the lmerTest package (Kuznetsova, Brockhoff & Christensen, 2014) was used to compute degrees of freedom and p-values of the factors involved in the best fit models. RESULTS Eight linear mixed effects models were compared to determine the best fit model for the data (Table 2) and a model which includes 'system' as a random effect and an interaction between 'day' and 'trial' was found to be the best fit (Fig. 1). Since 'treatment' was not included in this model, the 'fish' and 'no fish' system data will be pooled for the remainder of this manuscript. After the first administration of PZQ (Trial 1), the chemical was no longer detected in the system water by an average (±SD) of 8 ± 1 days. PZQ degradation was faster in Trial 2, with concentrations declining below detectable limits by 2 days, and a similar pattern was seen in Trial 3, where PZQ was non-detectable by 2 ± 1 day. Both Trials 4 and 5 were intended to be utilized as controls in this experiment, and thus no PZQ degradation was expected. However, PZQ was no longer at a detectable limit in the systems by 3 days and 5 ± 3 days, respectively. In the completely sterile system, where PZQ was left completely undisturbed in 500 mL plastic containers, the PZQ concentrations remained unchanged over a 15 day time-span (one-way ANOVA, p = 0.523) (Fig. 1). Figure 1 Effects of day and trial on PZQ concentrations. The solid, dashed and dotted lines represent the LOESS smoothing of the data from this study with a shaded area representing the 95% confidence interval. Treatments (fish vs. no fish systems) have been pooled as the best fit model suggested that 'Treatment' was not a significant factor. Descriptions of each trial can be found in Table 1. For Trials 1-5, n = 6 systems and for Trial 6, n = 3. DISCUSSION This study had 4 major findings: (1) PZQ is stable for at least 15 days in a completely sterile environment; (2) PZQ, even in a naïve aquarium system, degrades below detectable limits in less than 9 days; (3) A second introduction of PZQ to an aquarium system results in faster PZQ degradation rates than the first treatment, (4) Fish do not impact the degradation of PZQ in a marine aquarium system. First, in our sterile trial, in which sterilized water in three separate, 500-mL polypropylene sample bottles were dosed from the same PZQ stock solution, the drug concentration remained statistically unchanged for 15 days, suggesting that it is stable under these conditions. 
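As a minimal illustration of the stability check reported for the sterile bottles (one-way ANOVA, p = 0.523), a comparison of replicate concentrations from the frozen "day 0" bottles and the bottles left out for 15 days could be run as sketched below; the concentration values are made up and the grouping is an assumption for the example.

```python
import numpy as np
from scipy import stats

# Hypothetical replicate PZQ concentrations (ppm) from the sterile bottles:
# three frozen 90 min after dosing ("day 0") and three left out for 15 days.
day0 = np.array([2.02, 1.97, 2.01])
day15 = np.array([1.99, 2.03, 1.96])

f_stat, p_value = stats.f_oneway(day0, day15)
print(f"one-way ANOVA: F = {f_stat:.3f}, p = {p_value:.3f}")
# A large p-value (as reported in the study, p = 0.523) gives no evidence of a
# concentration change, i.e. PZQ appears stable in sterile seawater.
```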
Thus, any degradation observed in the other trials of this experiment is not thought to be attributed to natural PZQ instability in seawater, during this time frame. Second, the presence of fish was not a factor in the best fit models for these data, suggesting that the presence of fish did not significantly impact the reduction of PZQ in the water. However, this is not to say that marine fish are incapable of extracting PZQ from water. Rockfish (Sebastes schlegeli) treated with a high concentration, short-term PZQ bath (100 ppm for 4 min) were found to contain PZQ in their plasma and muscles for up to 72 h after treatment. However, these concentrations were minimal (maximum 5.96 µg mL −1 plasma and 0.49 µg g −1 muscle) (Kim, Kim & Kim, 2001). Similarly, when kingfish (Seriola lalandi) were treated with oral PZQ, the plasma concentrations of the drug reached non-detectable limits within 24 h of administration (Tubbs & Tingle, 2006a), further suggesting that the drug does not accumulate in the animal's body. Though PZQ concentrations in skin, blood or plasma were not measured in this study, the lack of any notable difference in PZQ concentrations in the water between systems with fish and those without fish indicate that the fish did not accumulate or remove a significant amount of the drug from the water, and thus are not likely the cause of its concentration decline. A similar trend was seen in the degradation of formalin in seawater, where doubling the fish densities in marine systems did not affect the rate of formalin degradation and was therefore not thought to be a contributing factor (Knight, Boles & Stamper, in press). Next, we observed PZQ degradation below detectable limits in every system and every experimental trial in this experiment within nine days of treatment. Each of these systems used the same mechanical filtration units, and thus any impact of mechanical filtration on degradation rate would be expected to be more similar between systems and trials. Instead, the overall degradation rate varied between trials such that the first exposure to PZQ showed the slowest rate of degradation, reaching non-detectable limits by nine days. Every subsequent trial exhibited PZQ degradation to the similar level by four days. Though this increase could be the result of mechanical filtration, it seems unlikely that the degradation rate in each system would remain consistent within a trial but not between trials. The increase in rate could be due to an increase in microbial populations (bacteria, protists, algae, phytoplankton, cyanobacteria etc.) which may be able to utilize PZQ as an energy source after the first exposure to the drug. This same pattern of rate increase after the first dosing was also seen when a 21.5 million L salt water aquarium was dosed with 2 ppm PZQ (Crowder & Charanda, 2004) suggesting that this is not merely a phenomena of small tank volumes. This variable breakdown rate was also seen repeatedly in the degradation of formalin in seawater (Adroer et al., 1990;Dickerson & Heukelekian, 1950;Kaszycki & Kołoczek, 2000;Pedersen, Pedersen & Sortkjaer, 2007) which was theorized as microbial degradation (Knight, Boles & Stamper, in press). To further support our hypothesis that the PZQ breakdown can be attributed to microbial breakdown, PZQ concentrations remained the same in our sterile systems, as previously described, indicating that PZQ concentration can remain consistent in seawater for at least 15 days. 
Overall, this indicates that PZQ does breakdown in marine ecosystems, yet, there is no evidence to suggest that breakdown observed in this study is natural for this chemical during this time frame, or that it can be attributed to mechanical filtration or removal by marine fish (H. flavolineatum), thus indicating the microbiota may be the cause. It is unclear at this time which, if any, microorganisms can metabolize PZQ from seawater, but any further research into this subject would greatly aid the field. In trials 4 and 5 of this experiment, fish were removed from the systems and each system was bleached at 75 ppm and 200 ppm Cl − , respectively. These trials were intended to act as controls by removing any biological material from the systems. However, PZQ still degraded within three days of treatment in these trials, indicating that either the breakdown is mechanical or that the bleach was not successful at removing all living microorganisms from the system. We discuss the unlikelihood of this degradation being an effect of mechanical filtration previously, and thus are left with the conclusion that microorganisms that can metabolize PZQ were not completely removed by the bleaching treatment. Within a marine aquarium, many microorganisms including bacteria, fungi, and protozoans inhabit a biofilm on surfaces within the aquarium system which is semiprotected from non-ideal environmental conditions with an extracellular polymeric matrix. Some biofilms have been described as so hydrophobic that they repel water and other liquids as well as Teflon (Epstein et al., 2011) making them impenetrable and unsusceptible to bleach. Though, some biofilms have been shown to be susceptible to killing by chlorine bleach at a concentration of 10 ppm after 127 min, or after only 12 min at a concentration of 90 ppm (Grobe, Zahller & Stewart, 2002). If the microorganisms in the biofilm survived the chlorine bleach treatment, they could then engage in cell dispersal after the chlorine had been removed, releasing many cells at once or few cells continuously to aid in re-colonization of the aquarium (for review on biofilm cell dispersion, see Solano, Echeverz & Lasa, 2014). In this study, the biofilm is thought to have survived 24 h at a concentration of 200 ppm Cl − and then released microorganisms back into the systems that were able to degrade PZQ in three days. The dynamics and likelihood of this hypothesis are unknown at this time, but studies investigating the rate at which a biofilm can re-colonize a system with microorganisms that can metabolize PZQ are needed to substantiate this claim. Ultimately, this study suggests that if marine aquarium systems are being treated with a bath of PZQ, it is important that the drug concentration is monitored throughout the treatment to ensure that therapeutic levels are being maintained. Further, in mature systems and systems that have experienced PZQ in the past, the rate of degradation may only increase, leading to non-detectable levels after as few as two days. Future studies which
2017-07-29T18:17:45.419Z
2016-04-04T00:00:00.000
{ "year": 2016, "sha1": "d943b06dd8867043c62a77955b103279a36789a9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7717/peerj.1857", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d943b06dd8867043c62a77955b103279a36789a9", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
247193798
pes2o/s2orc
v3-fos-license
Voice-based control system for smart hospital wards: a pilot study of patient acceptance Background The smart hospital concept of using the Internet of Things (IoT) to reduce the demand for human resources has become more popular in the aging society. Objective To implement the voice smart care (VSC) system in hospital wards and explore patient acceptance via the Technology Acceptance Model (TAM). Methods A structured questionnaire based on TAM was developed and validated as a research tool. Only patients hospitalized in the VSC wards who used the system for more than two days were invited to fill in the questionnaire. Statistical variables were analyzed using SPSS version 24.0. A total of 30 valid questionnaires were finally obtained after excluding two incomplete questionnaires. Cronbach's α values for all study constructs were above 0.84. Result We observed that the effects of perceived ease of use on perceived usefulness, of perceived usefulness on user satisfaction and attitude toward using, and of attitude toward using on behavioral intention to use were statistically significant (p < .01). Conclusion We have successfully developed the VSC system in a Taiwanese academic medical center. Our study indicated that perceived usefulness was a crucial factor, which means the system functions should precisely meet the patients' demands. Additionally, a clever system design is important since perceived ease of use positively affects perceived usefulness. The insight generated from this study could be beneficial to hospitals when implementing similar systems in their wards. Introduction The increased demand for healthcare services, particularly in an aging society, has become a major challenge in developed countries. Previous studies have proposed that governments and healthcare providers such as hospitals and long-term care institutions should prepare coping solutions to ensure comprehensive care [1,2]. Intuitively, increasing the number of medical staff to guarantee that every demand can be satisfied is the most straightforward way. However, human resources are precious. Therefore, using information technology to reduce human resource needs and improve nursing efficiency has become a crucial issue. Voice-based control, a specific IoT technology application used in healthcare environments, could improve healthcare quality and experience [3][4][5]. The voice-based control system is also an effective way to reduce contamination of surfaces, which could decrease the spread of healthcare-associated infections (HCAIs) through touchless computer interfaces [5]. Beyond preventing nosocomial infection, the voice-based control system is also suitable for all hospitalized patients and helpful for post-surgery individuals [6]. Patients can lie on the ward bed and control the facilities in the room without contact, making hospitalization more comfortable for patients with limited mobility. Since patients are the consumers in this setting, studying their viewpoints before the voice-based control system's actual implementation is essential.
Previous studies conducted in laboratory environments showed that users responded positively to different interaction modes in the wards [3,6]. Here, we conducted an experimental study within the real clinical workflow. The technology acceptance model (TAM), a robust theory that systematically explains why users accept or reject new technology, has been widely used in many previous studies [7]. The TAM constructs (e.g., perceived ease of use, perceived usefulness, and behavioral intention to use) can help determine the key factors governing whether people adopt a voice-based control system. The results could provide valuable insights into increasing patient satisfaction when healthcare providers design this kind of system. Thus, in this study, a structured questionnaire adopting TAM was used to evaluate patient acceptance of the voice-based control system implemented in the real clinical workflow of academic medical center wards. Finally, we discuss the advantages of using the VSC system and identify potential opportunities for hospitals when building similar systems. Materials and Methods The study was conducted in a Taiwanese academic medical center between January 15, 2019, and September 4, 2019. A structured questionnaire was used as the research tool. We only included patients who had behavioral capacity and stayed in a VSC ward for at least two days. For patients with limited mobility, caretakers such as parents or family members helped those who were unable to complete the questionnaire on their own. The questionnaire was given to patients and filled in onsite one day before discharge. Incompletely filled questionnaires were excluded. System design and implementation-voice smart care (VSC) system Traditionally, patients control the equipment in the ward mainly through switches, controllers, or assistance from other people (Fig. 1). Each facility must be controlled in its own corresponding way, and an integrated control solution was lacking. Thus, the VSC system, a novel approach that allows patients to control ward facilities through their own mobile devices, was implemented in this study. To achieve this, the original ward facilities that were to be controlled by the VSC system were retrofitted. An external hardware "smart switch module" was added so that these facilities could be controlled via Wi-Fi signals, while the original control methods were retained. Two VSC wards were retrofitted from general pediatric wards; construction took two weeks to complete, and move-in was not limited to children, so adults could also be admitted. The Swift programming language was used to build the graphical user interface (GUI) of the VSC system for iOS, whereas the Java programming language was used for the Android version. The GUI is shown in Fig. 2. Users could tap the orange button in the middle and speak commands to control facilities in the ward directly. If they were unwilling to speak, they could also use the four buttons below to control the corresponding facilities. In this way, control of the ward facilities can be concentrated on the mobile device, allowing patients to issue instructions through their own devices. The VSC system was available in both Chinese and English. Technology Acceptance Model The technology acceptance model (TAM), a set of theories developed by Fred Davis in 1989, is well suited to explaining why people accept or reject computers, especially with respect to technology use behavior [8].
TAM was based on the theory of rational action, which was widely used in the prediction and interpretation of the acceptance behavior of personal information systems. User attitude, mainly influenced by the perceived usefulness (benefit from using the technology) and perceived ease of use (feel free of effort when using the technology), was an essential factor that influenced user behavior (actual usage), and finally decided the acceptance of the information system by the users in the end. Perceived ease of use has a positive effect on perceived usefulness; both perceived ease of use and perceived usefulness affected the attitude toward using, ultimately affected behavioral intent to use and the use of information systems (actual systems) (Fig. 3). According to the TAM in literature verification, we assessed the feasibility factors (construct) for the impact of a VSC system with perceived ease of use, perceived usefulness, attitude toward using, user satisfaction, and behavioral intention to use (Fig. 4). As a basis to verify the research structure, the following eight hypotheses were proposed in this study: H1: "Perceived ease of use" has a positive effect on "Perceived usefulness". H2: "Perceived ease of use" has a positive effect on "Attitude toward using". H3: "Perceived ease of use" has a positive effect on "User satisfaction". H4: "Perceived usefulness" has a positive effect on "Behavioral intention to use". H5: "Perceived usefulness" has a positive effect on "Attitude toward using". H6: "Perceived usefulness" has a positive effect on "User satisfaction". H7: "Attitude toward using" has a positive effect on "Behavioral intention to use". H8: "User Satisfaction" has a positive effect on "Behavioral intention to use". Questionnaire Design and Validation The structural questionnaire adopting TAM was used as a research tool in this study. Before implementing the formal questionnaire, five experts were invited to review the questionnaires (Appendix file). We used HTMT (heterotrait-monotrait ratio) statistics to evaluate convergent and divergent validity between different constructs, which indicate discriminant validity while the value is lower than 0.9 [9]. The Cronbach α value is 0.94, which is above 0.70, suggesting internal consistency reliability [10]. There were two parts to the questionnaire. The first part involved the basic information of the study object. The second part involved TAM, which included perceived usefulness, perceived ease of use, attitude toward using, behavioral intention to use, and user satisfaction. The Likert scale (strongly disagree-1; disagree-2; neutral-3; agree-4; and strongly agree-5) was used to assess the degree of agreement or disagreement [11]. Statistical Analysis Pearson's correlation (r) was used to analyze the correlation between research variables. A value of + 1 is a positive linear correlation, 0 is no linear correlation, and − 1 is a negative linear correlation. It means strong correlation, moderate correlation, and weak correlation when the absolute value of r = 1.00 ~ 0.70, 0.69 ~ 0.40, and below 0.39, respectively [12]. Multiple regression analysis was used to explore the relationship between one dependent variable and two or more independent variables in four different models [13]. The variance inflation factor (VIF) was used as an indicator of multicollinearity, of which less than ten was considered acceptable [14]. 
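The correlation, regression, and VIF pipeline described above was carried out in SPSS; purely as an illustrative analogue, with invented construct scores and an assumed Model 2-style regression of attitude toward using on perceived ease of use and perceived usefulness, the same steps might look like this in Python.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical construct scores (means of 5-point Likert items) for a few
# respondents; the invented numbers only illustrate the analysis pipeline.
df = pd.DataFrame({
    "PEOU": [4.2, 3.8, 4.5, 4.0, 3.5, 4.8, 4.1, 3.9],
    "PU":   [4.0, 3.6, 4.6, 3.7, 3.3, 4.7, 4.2, 3.6],
    "AT":   [4.3, 3.9, 4.6, 4.1, 3.6, 4.9, 4.2, 4.0],
})

# Pearson correlations between constructs
print(df.corr(method="pearson").round(2))

# Analogue of Model 2: attitude toward using regressed on PEOU and PU
X = sm.add_constant(df[["PEOU", "PU"]])
fit = sm.OLS(df["AT"], X).fit()
print(fit.params, fit.rsquared)

# Variance inflation factors for the predictors (below 10 deemed acceptable)
for i, name in enumerate(X.columns):
    if name != "const":
        print(f"VIF({name}) = {variance_inflation_factor(X.values, i):.2f}")
```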
We used the Statistical Package for the Social Sciences (SPSS) version 24.0 for all statistical analyses. Results A total of 32 questionnaires were sent out during the study period. Two invalid questionnaires were excluded because of incomplete filling, no answers, or all options left unchanged. Finally, 30 valid questionnaires remained in our study. Demographic Characteristics In order to understand the respondents' basic personal information, relative frequency distributions and percentages were used to describe the "personal background information". The respondents' basic information included gender, age, education, major language, the cell phone operating system, the daily use frequency of the VSC, and the reason why they did not want to use the VSC system (Table 1). Of the 30 respondents, 17 (57.7%) were aged 21-30 years, 12 (40.0%) were aged 31-40 years, and 1 (3.0%) was aged 41 years or older. Regarding the highest education level, more than half (60%) had a bachelor's degree or above. Regarding the cell phone operating system, one-third (33.3%) of the 30 respondents were using iOS. Regarding the daily use frequency of the voice smart care system, 19 (63.0%) respondents used the system 1 ~ 5 times, 8 (26.7%) used it 6 ~ 10 times, and 3 (10.0%) used it 16 ~ 20 times. Regarding why people did not want to use the voice smart care system, most respondents reported that the speech recognition quality was not good, followed by not needing the system. We also provided an open-ended section in the questionnaire to collect suggestions on how the voice smart care system could be improved. Some users suggested that the system latency should be shorter and the accuracy of voice recognition should be increased. Users also mentioned that the system needs a usage demonstration, a dialect voice control version, and more controllable facilities. Measurement Model We used a 5-point Likert scale to evaluate the degree of agreement or disagreement for each question in the different constructs; the results are shown in Table 2. There was a tendency for respondents to select agree and strongly agree while filling in the questionnaires. However, the reverse-coded questions B3 and C4 received the most disagree responses (N = 20). We reversed them by the following rule before the next step: strongly disagree, disagree, neutral, agree, and strongly agree were scored 5, 4, 3, 2, and 1, respectively. Table 3 provides the descriptive statistics, validity measurement results, and the values of Cronbach's α coefficient for each construct. Comparing the mean values among these five constructs, perceived usefulness (PU) ranked the lowest with a score of 3.99 out of 5.00. Meanwhile, respondents' attitude toward using (AT) the VSC was the strongest, with a score of 4.24 overall. Perceived ease of use (PEOU) of the system had the second-highest score of 4.18. Cronbach's α analysis was used to measure the reliability of the questionnaire items. Based on the analysis, the internal consistency for each construct was greater than the minimum acceptable level of 0.7, indicating that the survey instrument was reliable and well constructed. Some constructs, such as perceived usefulness (PU) and attitude toward using (AT), had excellent internal consistency, as their α coefficients were greater than 0.9. The HTMT statistics showed that most constructs had good discriminant validity (< 0.82) with respect to each other.
However, the discriminant validity between attitude toward using(AT) and behavioral intention to use(BI) reached a value of 0.92, which indicated that their concept is similar. Hypothesis Testing In the first model, we explored the factor (perceived ease of use) that influences users to think it is beneficial while adopting the system. The perceived ease of use with In the second model, we explored the factor (perceived ease of use & perceived usefulness) that influences users' assessment of using the specific system. The two independent variables could effectively explain the 54% (R2 = 0.54) of the overall variance with statistical significance (F = 18.32, p < 0.001). The perceived usefulness had a positive and statistically significant effect on attitude toward using (β = 0.63, t = 4.36, p < 0.001), which supported H5. However, there was no significant correlation between perceived ease of use and attitude toward using (p = 0.14 > 0.01). Thus, H2 was not supported. It means that the user's evaluation of the system depends more on whether the system can provide substantial help instead of it is easy to use or not. In the third model, the influencing factors (independent variables) were the same as Model 2 but with different dependent variables (user satisfaction). Two independent variables could effectively explain the 52% (R2 = 0.52) of the overall variance with statistical significance (F = 16.93, p < 0.001). The perceived usefulness had a positive and statistically significant effect on user satisfaction (β = 0.63, t = 4.26, p < 0.001), which supported H6. However, there was no significant correlation between perceived ease of use and user satisfaction (p = 0.18 > 0.01). Thus, H3 was not supported. In the fourth model, we evaluated factors (perceived usefulness, attitude toward using, and user satisfaction) that affected behavioral intention to use. The three independent variables could effectively explain the 69% (R2 = 0.69) of the overall variance with statistical significance (F = 22.48, p < 0.001). The attitude toward using had a positive and statistically significant effect on behavioral intention to use (β = 0.70, t = 4.49, p < 0.001), which supported H7. However, there were no significant correlations between perceived usefulness and behavioral intention to use (p = 0.11 > 0.01), and between user satisfaction and behavioral intention to use (p = 0.36 > 0.01). Thus, H4 and H8 were not supported. Since their concepts are highly similar, we were not surprised by this result. Based on the above research results, the eight hypotheses of this research could be verified, and the results were summarized in Fig. 5. We could find out that the system could provide practical help (perceived usefulness) is the crucial factor determining users' willingness and satisfaction (attitude toward using and user satisfaction) which is driven by whether easy to operate (perceived ease of use). Discussion This study implemented the VSC, a voice-based control system in the hospital wards, and evaluated patients' acceptance of the system through structural questionnaires after practical use of VSC for more than two days. Many researchers have been studying the usability of IoT in smart hospitals [15][16][17]. In this study, we used the TAM to qualitatively explore user acceptance for a voice-based control system among hospitalized patients. 
The constructs included perceived ease of use, perceived usefulness, attitude toward using, user satisfaction, and behavioral intention to use. The results generated from our study could provide valuable insights when hospitals plan to implement a similar system in their wards, ultimately to improve patient satisfaction. Furthermore, we used the TAM to determine the constructs which affect user acceptance. Our results also indicated that three TAM constructs (perceived ease of use, perceived usefulness, and willingness to use) were crucial to people's tendency to use the VSC or any other intelligent control system, which is consistent with the findings of other studies [18,19]. Perceived ease of use, the degree to which a person believes that using a particular system would be free from effort [8], was an important factor affecting perceived usefulness. For the question "The reason why I don't want to use the voice smart care system", poor speech recognition was the most frequently reported answer among the respondents. Intuitively, poor speech recognition quality increases the difficulty of using the system, making users spend more time completing their tasks than initially expected [20]. However, our results showed that perceived ease of use was not a determinant of attitude toward using, consistent with past studies [21,22]. Perceived usefulness, the degree to which a person believes that using a particular system would enhance their job performance [8], was a crucial predictor in past work [23]. In line with this, in our study perceived usefulness positively affected both attitude toward using and user satisfaction, and was itself promoted by perceived ease of use. Thus, this auxiliary system should be useful to those in need, such as postoperative patients or disabled individuals. The VSC system allows patients to control facilities without assistance, and providing substantial help will give users a positive attitude and satisfaction while using the new technology. The VSC system allows patients to control facilities in the wards by speaking commands to their mobile phones or tablets and had a high acceptance rate in our study. We believe there are potential opportunities to implement analogous smart healthcare systems in wards. Therefore, based on our research results, our suggestions for hospitals designing a voice-based control system are given below. First, perceived usefulness was a major factor that decided the patients' satisfaction (H6) and attitudes (H5), and ultimately whether they adopted the system or not; routine control tasks should therefore be completed more efficiently, and more facilities should be made controllable (e.g., the air conditioner). Second, users should be able to start intuitively without relying on a manual, since users only consider a system useful under the premise that it is easy to operate. Third, the comments and feedback given by the actual users of the system are crucial for other potential adopters to start using the system [24]; having a satisfactory user experience at the beginning will be an advantage in promoting the system in the future. A minimal, purely illustrative sketch of such a command-to-facility mapping is given below.
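Following up on that note, the sketch below is purely hypothetical; the phrases, device identifiers, and the send_switch_command helper are assumptions for illustration and do not describe the actual VSC implementation.

```python
# Hypothetical mapping from recognized phrases to smart-switch actions.
# Device identifiers and the transport below are illustrative assumptions only.
FACILITIES = {
    "turn on the light":  ("bed_light", "on"),
    "turn off the light": ("bed_light", "off"),
    "open the curtain":   ("curtain", "open"),
    "close the curtain":  ("curtain", "close"),
    "raise the bed":      ("bed_motor", "up"),
}

def send_switch_command(device: str, action: str) -> None:
    """Placeholder for the Wi-Fi call to a smart switch module."""
    print(f"[wifi] {device} -> {action}")

def handle_utterance(text: str) -> bool:
    """Map a recognized utterance to a facility action; return False if unknown,
    so the GUI can fall back to the four on-screen buttons."""
    key = text.strip().lower()
    if key in FACILITIES:
        device, action = FACILITIES[key]
        send_switch_command(device, action)
        return True
    return False

handle_utterance("Turn on the light")
```

Returning False for unrecognized utterances mirrors the design choice of keeping the on-screen buttons as a fallback when speech recognition fails.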
By following these design guidelines for the voice-based control system, patient autonomy may be improved [25,26], which could decrease medical staff burnout [27] and provide possible benefits to patient safety at the same time.
Limitations
This study has several limitations. First, the generalizability of this study could be limited by its low number of participants. Since only two wards were reconstructed as our experimental field, our ability to collect data was restricted. Second, some of the data patients provided relied on their recollection; a more reliable data collection method, such as system log files, should be used. Third, the participants were relatively young (age < 45) because the study was conducted in general pediatric wards, so elderly people's acceptance of the VSC system should also be studied. Fourth, the patient's condition was not included in the basic demographics of the respondents; for patients with mobility problems, this information should be stated in future studies. Lastly, the viewpoints of medical staff should also be evaluated. Our research focused only on the patients' point of view, but the medical staff's opinion is also important. A multifaceted evaluation can make the system more comprehensive, which could genuinely reduce the medical staff's burden.
Conclusion
We have demonstrated a solution for developing the VSC, a voice-based control system for interacting with equipment in the ward. Our experience could potentially serve as a reference for other hospitals implementing a similar system. We also explored the key factors in patient acceptance of the system through the TAM. The results showed that perceived usefulness was a significant factor affecting attitude toward using and user satisfaction, which means the system's functions should precisely meet patients' demands. Additionally, a clever system design is important, since perceived ease of use positively affects perceived usefulness. These results could expand the functionality of the hospital's traditional ward control system and shed light on the implementation of voice-based control systems.
2022-03-03T14:41:39.137Z
2022-03-03T00:00:00.000
{ "year": 2022, "sha1": "dd5db00da6e963f3f35f007f1e7856dc27509164", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "dd5db00da6e963f3f35f007f1e7856dc27509164", "s2fieldsofstudy": [ "Medicine", "Business" ], "extfieldsofstudy": [ "Medicine" ] }
14919295
pes2o/s2orc
v3-fos-license
Transient Photoinduced Absorption in Ultrathin As-grown Nanocrystalline Silicon Films
We have studied ultrafast carrier dynamics in nanocrystalline silicon films with thicknesses of a few nanometers, where boundary-related states and quantum confinement play an important role. Transient non-degenerate photoinduced absorption measurements have been employed to investigate the effects of grain boundaries and quantum confinement on the relaxation dynamics of photogenerated carriers. An observed long initial rise of the photoinduced absorption for the thicker films agrees well with the existence of boundary-related states acting as fast traps. With decreasing material thickness, the relaxation dynamics become faster since the density of boundary-related states increases. Furthermore, by probing with longer wavelengths we are able to time-resolve optical paths with faster relaxations. This fact is strongly correlated with probing different points of the first Brillouin zone of the band structure of these materials.
Introduction
Polycrystalline silicon thin films have proven to be of major importance in the semiconductor industry [1][2][3][4]. They are considered an important component of silicon integrated circuit technology and are currently used in a wide range of device applications. Although considerable effort has been devoted to the characterization of this material, little work has been performed on nanoscale film thicknesses. It is expected that a decrease of the film thickness to the nanometer scale results in a modification of the energy states in these nanofilms. This is a result of two factors: first, the large fraction of boundary atoms relative to the total number of atoms, and second, the transformation of the nanograin cores due to the quantum size effect. Recently, preliminary ultrafast carrier dynamics results in these types of thin films [5] revealed various relaxation mechanisms under different growth conditions. As a consequence, the optical properties of these materials under steady-state and photoexcited conditions change considerably, providing a more applicable picture of these films for photovoltaic applications [4] and optoelectronic devices. The optical properties of these materials under steady-state conditions have recently been published [6]. In that work, we reported the determination of critical points (CPs) in the first Brillouin zone of the band structure of these films with thicknesses of 5-30 nm using spectroscopic ellipsometry, giving important insight into the effect of film thickness on the tunability of absorption. Based on the CPs extracted in that work, in this article we report a comprehensive study of transient photoinduced absorption (PA) of as-grown nanocrystalline silicon films with thicknesses in the range of 5-30 nm. From this study, we are able to time-resolve the relaxation paths within the complex energy band structure of the nanocrystalline silicon films. The influence of the grain boundaries and of the quantum confinement effect due to the nanoscale grain size on the relaxation dynamics is examined in detail.
Experimental Procedure
In this work, the dynamical behavior of as-grown nanocrystalline silicon films following ultrashort pulse excitation is investigated through the temporal behavior of reflectivity and transmission [7]. The source of excitation consists of a self mode-locked Ti:Sapphire oscillator generating 100 fs pulses at 800 nm.
A chirped pulse laser amplifier based on a regenerative cavity configuration is used to amplify the pulses to approximately 1 mJ at a repetition rate of 1 kHz. These ultrashort pulses are used in a pump-probe setup in which the pump beam is frequency doubled to 400 nm using a non-linear crystal. A half-wave plate and a polarizer in front of the non-linear crystal were utilized to control the intensity of the pump incident on the sample. A small part of the fundamental energy was also used to generate a supercontinuum white light by focusing the beam on a sapphire plate. An ultrathin high reflector at 800 nm was used to reject the residual fundamental light from the generated white light, eliminating the possibility of excitation by the probe light. The white light probe beam is used in a non-collinear geometry, in a pump-probe configuration. Optical elements such as focusing mirrors were utilized to minimize dispersion effects and thus broadening of the laser pulse. The reflected and transmitted beams are separately directed onto their respective silicon detectors after passing through a bandpass filter selecting the probe wavelength from the white light. The differential reflection and transmission signals were measured using lock-in amplifiers referenced to the optical chopper frequency of the pump beam. The temporal variation in the PA is extracted from the transient reflection and transmission measurements, which is a direct measure of the photoexcited carrier dynamics within the probing region [8]. In this work, an optical absorption fluence of ~0.5 mJ/cm² has been used to excite the nanocrystalline silicon films and determine their temporal behavior. The estimated photogenerated carrier density was approximately 4 × 10¹⁹ carriers/cm³ for the fluence used in this work. The samples under investigation were very thin as-grown nanocrystalline silicon films with thicknesses in the range of 5-30 nm, fabricated on a quartz substrate using low pressure chemical vapor deposition (LPCVD) of silicon from silane at 610 °C and 300 mTorr. Transmission electron microscopy (TEM) and electron diffraction patterns taken on these nanofilms reveal their crystallinity, with a grain size that depends on film thickness. In the z-direction the grain size was approximately equal to the film thickness, while in the plane (x-y directions) it was in the range of 5-19 nm for the 5-nm-thick film and in the range of 6-32 nm for the 30-nm film [6]. A typical example of our images is shown in Fig. 1 for cross-sectional specimens of 5, 15, and 30 nm film thickness, respectively. (The right panel of Fig. 1 shows a high-resolution TEM image of the 30-nm film, in which the larger nanocrystal size is evident.)
Results and Discussion
Figure 2 shows the temporal behavior of the as-grown nanocrystalline silicon films over a range of 300 ps following excitation at t = 0 with 3.1 eV, 100 fs pulses. Here, we should point out that measurements were also carried out at carrier densities up to five times lower than the above, and still showed similar relaxation rates with a linear peak signal dependence. The three graphs shown in Fig. 2 depict the typical temporal PA response corresponding to the nanofilms with thicknesses of 5, 15, and 30 nm for various probing wavelengths between 400 and 980 nm. From these results, the characteristic sharp increase in the absorption followed by a multi-exponential decay towards equilibrium is clearly evident.
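To make the detection scheme described above concrete, the short sketch below shows one common way a photoinduced absorption transient can be reconstructed from the measured differential reflection and transmission, using the thin-film energy balance A = 1 - R - T so that the change in absorption is approximately -(ΔR + ΔT). This is a minimal sketch under that assumption, not necessarily the exact expression used here; the steady-state values and signal arrays are hypothetical placeholders for the lock-in data.

```python
# Sketch: reconstruct the photoinduced absorption transient from lock-in
# measurements of dR/R and dT/T, assuming A = 1 - R - T for the thin film
# so that dA(t) ~ -(dR(t) + dT(t)). Values below are placeholders.
import numpy as np

R0, T0 = 0.35, 0.55                    # assumed steady-state reflectance/transmittance
delay_ps = np.linspace(-5, 300, 1200)  # pump-probe delay axis
dR_over_R = np.zeros_like(delay_ps)    # placeholder lock-in signal (reflection channel)
dT_over_T = np.zeros_like(delay_ps)    # placeholder lock-in signal (transmission channel)

dR = R0 * dR_over_R                    # convert relative changes to absolute changes
dT = T0 * dT_over_T
dA = -(dR + dT)                        # photoinduced absorption change versus delay
```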
The observed rise time corresponds to the time the photogenerated carriers need to reach the probed energy states lying below the initial excitation level, providing the maximum coupling efficiency and hence the maximum induced change in the absorption. Close examination of the required rise time (within the first few picoseconds) reveals a variation with film thickness between 5 and 30 nm. A typical example of this behavior is shown in Fig. 3 for a 450 nm probing wavelength. For the 5-nm film sample, the rise time is estimated to be 1.2 ps, whereas for the films of thickness 10-20 nm the rise time is only 600 fs. In addition, it is interesting to note that a further increase of the film thickness causes an increase in the rise time (1.5 ps for the 25-nm film). This is more obvious for the 30-nm film, where the rise time is estimated to be ~25 ps. Based on the extracted CPs [6], this long rise time is attributed to state filling of the occupied surface-related states, which results in a negative contribution to the induced absorption from secondary excitations. Thus, 25 ps is the estimated time for the photogenerated carriers to move out of these boundary-related states. This result is in agreement with previous degenerate pump-probe measurements [9] in which a combination of state filling and PA was observed at 400 nm with a similar delay time for the 30-nm film. Here, we should point out that data for the longer probing wavelengths (550-980 nm) show a rise time of approximately 300 fs for all the samples involved in this work. Figure 3 shows the temporal behavior of the films in the first few picoseconds when probing at 450 nm. It is obvious that with decreasing film thickness we notice a faster carrier relaxation recovery in the first few picoseconds. This behavior holds down to a film thickness of 10 nm; a further decrease in the film thickness results in a substantially slower recovery. We believe this may be attributed to exciton confinement at the surrounding interfaces of the formed nanograins due to the small thickness of the material, which alters the available decay channels of the photogenerated carriers. With regard to the long-time behavior of the films, from the data in Fig. 2 one may clearly deduce that the photogenerated carriers have the longest decays at the shortest probing wavelengths. Furthermore, the 30-nm film appears to have the longest recovery compared with the other samples of smaller film thickness. The relaxation dynamics of these nanofilms are rather complex, and a fit to the experimental data requires a multi-exponential function. A satisfactory fit (χ² > 0.99) has been achieved with a minimum of a three-exponential decay function for all the data, signifying the multiple recombination channels available for the photogenerated carriers in these materials. The faster recombination components observed for the smaller-thickness samples may be attributed to the increase in the density of boundary-related states with decreasing film thickness. For a more quantitative analysis, we present the fitting parameters of the three-exponential decay model in Table 1. From these results, it is obvious that the fast relaxation mechanism becomes more important with increasing probing wavelength for the 5-nm film (see the amplitude component represented by parameter A in Table 1). With increasing probing wavelengths, the decays appear to be faster.
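For illustration, a three-exponential fit of the kind referred to above (and summarized in Table 1) could be carried out as in the sketch below. The synthetic transient, starting guesses, and parameter names are assumptions for demonstration only and are not the authors' fitting code.

```python
# Sketch of a three-exponential fit to a photoinduced-absorption decay:
# dA(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2) + A3*exp(-t/tau3) + offset.
# The data below are synthetic placeholders for a measured transient (t > 0).
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, A1, tau1, A2, tau2, A3, tau3, c):
    return (A1 * np.exp(-t / tau1) + A2 * np.exp(-t / tau2)
            + A3 * np.exp(-t / tau3) + c)

t = np.linspace(0.5, 300, 600)                        # delay in ps
data = tri_exp(t, 0.5, 2.0, 0.3, 20.0, 0.2, 200.0, 0.0)
data += 0.01 * np.random.default_rng(0).normal(size=t.size)

p0 = [0.4, 1.0, 0.3, 10.0, 0.2, 100.0, 0.0]           # initial guesses
popt, pcov = curve_fit(tri_exp, t, data, p0=p0, maxfev=20000)
print("amplitudes:", popt[0], popt[2], popt[4])
print("time constants (ps):", popt[1], popt[3], popt[5])
```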
We believe the faster decays observed at longer probing wavelengths are attributable to the different probing regions of the first Brillouin zone of the band structure of these materials. Here, we should point out that the first two decays (τ1, τ2) are strongly related to intraband relaxation mechanisms, whereas the third, slow decay (τ3) corresponds to relaxation from states strongly correlated with the band edge of the materials. Furthermore, the observed decays for the 30-nm film at the shortest probing wavelength appear to be very long, which is attributed to the non-radiative relaxation dynamics of this thicker material approaching the bulk behavior of silicon.
Conclusions
We have investigated carrier dynamics in as-grown nanocrystalline silicon films with thicknesses in the range of 5-30 nm using a non-degenerate pump-probe configuration. An observed long initial rise of the PA for the thicker films agrees well with the existence of boundary-related states acting as fast traps. Transient PA measurements reveal information about the relaxation dynamics within the complex band structure of these nanofilms. With decreasing material thickness, the relaxation dynamics become faster since the density of boundary-related states increases. Furthermore, by probing with longer wavelengths we are able to time-resolve optical paths with faster relaxations. This fact is strongly correlated with probing different points of the first Brillouin zone of the band structure of these materials.
2014-10-01T00:00:00.000Z
2007-11-27T00:00:00.000
{ "year": 2007, "sha1": "da6f30bc301a59a3e4a962c4efd29eee64e2deaf", "oa_license": "CCBY", "oa_url": "https://nanoscalereslett.springeropen.com/track/pdf/10.1007/s11671-007-9105-1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c73fe7bcc9cde596f1203015e8cc8ae5242df44a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
16957899
pes2o/s2orc
v3-fos-license
Examining Social Adaptations in a Volatile Landscape in Northern Mongolia via the Agent-Based Model Ger Grouper The environment of the mountain-steppe-taiga of northern Mongolia is often characterized as marginal because of the high altitude, highly variable precipitation levels, low winter temperatures, and periodic droughts coupled with severe winter storms (known as dzuds). Despite these conditions, herders have inhabited this landscape for thousands of years, and hunter-gatherer-fishers before that. One way in which the risks associated with such a challenging and variable landscape are mitigated is through social networks and inter-family cooperation. We present an agent-based simulation, Ger Grouper, to examine how households have mitigated these risks through cooperation. The Ger Grouper simulation takes into account locational decisions of households, looks at fission/fusion dynamics of households and how those relate to environmental pressures, and assesses how degrees of relatedness can influence sharing of resources during harsh winters. This model, coupled with the traditional archaeological and ethnographic methods, helps shed light on the links between early Mongolian pastoralist adaptations and the environment. While preliminary results are promising, it is hoped that further development of this model will be able to characterize changing land-use patterns as social and political networks developed. OPEN ACCESS Introduction Sharing and cooperation between individuals and among groups can increase carrying capacity and survivability [1,2].However, sharing and cooperation can take many forms [1,[3][4][5], some more beneficial to the group, or individuals, than others.Here we ask "How do different sharing strategies impact survivability in a mobile pastoralist case?" This work is built on theory developed in the U.S. Southwest among sedentary farming populations, which we adapt and apply to mobile pastoralists of Mongolia.Specifically, we use theory developed by Hegmon [6] who simulated the rationale for exchange among Hopi based on three forms of logic: pooling of resources, independence (or hoarding of resources), and restricted sharing [6].Her research showed that in general restricted sharing is the best strategy, often working better than the other two strategies for both low-and high-production years.By creating rules for whom to share with and when, the Hopi are able to take control of their own needs first before assessing the needs of the community [6,7].We hypothesize that similar mechanisms were at play with Mongolian pastoralists in prehistory and that rules for whom to share with and when structure modern household configurations. 
Seasonal mobility is a common strategy employed primarily by hunter-gatherers and pastoralists living in highly variable, low productivity environments.These environments are characterized by little precipitation, high altitude/latitude, and/or extreme temperature (cold or hot).In these environments, people migrate within the landscape to take advantage of spatially dispersed, seasonally available resources.These patterns are not random, but rather the culmination of generations of accumulated traditional ecological knowledge [8,9].Mobility can be a wise economic adaptation with many variant forms (i.e., degree, frequency) [10,11], allowing mobile groups to inhabit regions that are not easily occupied by settled groups.Since the individual household units of a group are willing and able to move easily, the group by default is flexible, able to adapt or react to changing environmental, political and social challenges on short notice.In moments of crisis (i.e., high risk), adaptive solutions can be immediately implemented that will carry the household units through until the previously established habitation pattern can be resumed or a new pattern developed. In central and northern Mongolia, it has been noted [12] that following years of environmental catastrophe (usually resulting in great losses of livestock) household units, which usually numbered from two to four households, clustered into larger groups of five to seven-a cluster similar in size to Hegmon's ideal restricted sharing group [7].Over time, after households had recovered from herd losses, the units once again dispersed.This temporary fission-fusion cycle is an adaptation to the inherent risk of the low-productivity, highly variable environment in which these populations live.Because these households move every few months anyway, this fission-fusion cycle can occur rather rapidly.However, cooperation was not random, though the rules about who would help whom and under what circumstances were not immediately apparent.While this has been observed anecdotally, ethnographic data continues to be compiled to more rigorously characterize these cycles [12]. In patchy environments (i.e., environments where productivity is spatially and/or temporally variable), the ability to count on kin and neighbors during years of low productivity is essential for survival.Sahlins [5] demonstrated that cross-culturally there are distinct rules for the sharing of resources, and that small-scale societies worldwide have tactics for surviving bad years.Hegmon [6,7] has shown that restricted-sharing tactics are reliable for most years when both the pooling of resources and hoarding of resources are not optimal.Such strategies appear to be employed by mobile pastoral groups of modern Mongolia.The decision to aggregate with some groups as a form of risk management, while still excluding other groups from aggregation, exemplifies a strategy of counting on trusted kin or neighbors when times are difficult. 
For this research, developing agent-based models that imbue agents with decisions on where to locate and how to form cohesive groups will enable the examination of individual-level processes as reactions to environmental pressures.Costopoulos, Lake and Gupta tell us that "simulations can surprise us.Whether the surprises are due to our faulty understanding of the reality we are modeling or to our faulty modeling of the reality we are seeking to understand, they can force us to reexamine our assumptions and to push beyond the intuitive models of the past for which we often settle too easily" [13]. While decades of research have focused on cross-cultural studies of human systems, model building and theory testing provide a novel way to examine the world, helping to answer questions that would be unanswerable from traditional approaches [14].Instead of seeing the panoply of human culture and searching for patterns, we create theory, build models based on theory, and then compare output to data.Simulation enables us to test theories developed by anthropologists and historians from years of cross-cultural research [14].Lake estimates that works based on 54 different archaeological simulations were published between 2001 and 2010, showing the increasing value of agent-based modeling in archaeology [14], and the increasing ability for agent-based modeling to assess archaeological theories.Simulation does not more correctly address the archaeological record, but can address different questions than cross-cultural research can, and can easily help refine hypotheses of the archaeological record. Our paper explores the extent to which sharing practices would have helped the survival of mobile pastoralists in Mongolia and the surrounding regions of northeast Asia, and how a patchy environment led to the profusion of fission/fusion dynamics in Mongolia.In this model we define sharing and cooperation very simply: the likelihood that one household will merge with another household in need of assistance for one timestep, dividing resources equally between households.Seasonal movements characteristic of the semi-nomadic inhabitants of the region provide ample opportunity to examine such fusion and fission events.Groups fuse together when it is beneficial to do so, and then part ways when this approach becomes more advantageous.The presented model will help us to understand when fusion, fission, and sharing may be sought as a risk management strategy. Computer modeling is not a new approach for Mongolian case studies [15,16].However, these models approach the question of the emergence of empires and other large political formations based on a number of environmental and historical parameters.The model presented here is of an entirely different scale and is based in ethnographic and historical data.While previous models are designed to investigate political processes on an inter-regional scale, the model we are presenting here approaches the economic sphere from the domestic (i.e., household) viewpoint with the intention of creating results that are compatible with available ethnographic and archaeological data from the region. 
This paper is structured in the following way.First, we present the necessary background for how sharing strategies structure populations in northern Mongolia.We discuss ethnographic and archaeological evidence for sharing both in our study area and in other small-scale societies worldwide.We then present how agent-based modeling can help to examine sharing strategies, exploring how four different sharing strategies create different population levels in a variable environment.In the conclusion, we discuss the significance of our findings from employing a simple agent-based model and suggest ways in which this model may be refined for further future use. Background Mongolia is located in northeast Asia and is home to a primarily pastoralist population.In this study we focus on the inhabitants of the steppe and forest steppe in the central and northern portions of the country.These individuals primarily keep sheep and goats, with horses, cows, yaks and camels making up lesser percentages of their stock.Mongolian pastoralists derive much of what they consume from their livestock, and spend considerable time and energy ensuring the survival of their flocks.They rely on extensive traditional ecological knowledge that has been passed from generation to generation in order to minimize herd deaths during the difficult winter months.This knowledge includes ways to navigate both environmental landscapes and social networks.These modern day herders provide a useful ethnographic analogy, when applied cautiously, for the semi-nomadic nature of the early herders of Mongolia [12,17,18]. Today, Mongolian pastoralists move seasonally between summer and winter pastures.During summer, grazing conditions are good and herds are fattened for the long winters when grazing conditions are poor because of extended cold periods, little forage, and snow cover.These movements vary from a few kilometers to over 100 km between camps, though in central and northern Mongolia, where the authors have collected data, the average is usually 10-20 km [12].Typically households move two to five times annually following a similar mobility pattern year after year, returning to the same location at roughly the same time each season [12,[19][20][21][22].However, this pattern may shift from time to time in order to address a number of factors, including social conventions and environmental degradation or disaster. Ethnographic observation has shown that group size is not consistent from season to season or year-to-year [19].Each group of households, known as a khot ail, is made up of a number of nuclear families, each occupying their own dwelling called a ger (a round tent made of wood, felt and canvas or hides-also known by the Russian term "yurt").The size of the khot ail may vary from a single ger to more than 20 [23], although most never exceed 10 households.Average camp size appears to increase following environmental disasters as individual khot ails band together utilizing kinship and social ties as a failsafe to help recover from the losses of herd animals following these events.Gers from the same valley may group together, but larger risk mitigating groups that extend beyond valleys are also normal [12].If the individual khot ails are able to rebuild their herds, they may once again disband into smaller groups. 
A number of environmental conditions might present risk to the herds of Mongolia's rural populations.These include drought, bad winter storms locally known as dzuds, and the outbreak of epizootic diseases [24,25].Dzuds come in several varieties depending upon the particular environmental conditions.Types of dzuds include: deep snows, no snows, ice sheets, extended or extreme cold spells, and extreme overgrazing and trampling.These events occur periodically-every 5-10 years according to some studies [25].Dzuds may not impact regions equally creating a "patchy" environment on the large scale.While much of the discussion about mitigating the effects of dzuds has focused on aid efforts and observed rural to urban migration, a few sources have attempted to document the local adaptations and coping methods used by herders [25,26].Shelter may be improved including: alterations to structures, tunneling, insulating structures with dung, and bringing animals into the family ger.Of interest to this project are those strategies that rely upon social and kin networks to mitigate the impact of dzuds.Such adaptations include movement to other, less impacted areas (known as Otor, the movement from adjacent valleys up to hundreds of kilometers away), or joining forces with local family or friends in which mutual assistance may increase the chances of survival.Though these are short-lived events, they can be devastating.Cooperation is needed not to survive the Dzud itself, but to recover after great losses following the event.While there are clear advantages to the "movers", the "hosts" are willing participants in this coping method because of expected future reciprocity (much like insurance) and cultural expectations (e.g., an expectation to help out extended family members) [5]. It is clear that modern day Mongolia has a culturally dictated set of rules regarding sharing and cooperation.But how do these sharing strategies develop?A study by Fitzhugh et al. [27] helps inform us of the development of sharing strategies.They suggest that hunter-gatherer populations use exchange to build information networks that help establish relationships among different bands.These information networks connect households to an expanded pool of bands and/or tribes, allowing for group survival during catastrophic events.Additionally, they argue that high cost and low predictability/low productivity landscapes exhibit higher network connectivity than highly predictable landscapes.Furthermore, as populations become entrenched in an area they adapt to the environment and will rely on information networks only for highly unpredictable and catastrophic events, not for more predictable events.The high climatic variability of Mongolia combined with the potential for (and reality of) catastrophic failure would make the region more reliant on networks, according to this model [27]. Fitzhugh et al. [27] also state that groups should rely on more proximal bands for regularly occurring crises, such as low food production and droughts, while more irregular crises, such as earthquakes, would require a longer temporal memory of alliances with more distant allies.Therefore, since dzuds are unpredictable, but frequently recurring disasters, we can infer from Fitzhugh et al.'s model that Mongolian households would rely more on their neighbors for economic stability than on more distant allies. 
The Model A model is an idealized microcosm of a real system and is built on theory, or, as Clarke [28] states "models are pieces of machinery that relate observations to theoretical ideas."Using models built on simple rules can help eliminate poor hypotheses, and can help enable better understanding of a system.Even when a model is wrong (as "all models are wrong, but some are useful" [29] we can glean a better understanding of the system by slowly building the model up and studying simplified processes of complex systems. The agent-based model detailed in this paper was generated in NetLogo, although could have easily been written for any other modeling platform.The agents in this model represent an economic production unit, in this case a household (sensu [30]).There are twenty agents randomly seeded on the landscape at the beginning of the simulation.Each agent represents one of four distinct sharing scenarios, discussed below.The landscape is 40-cells by 40-cells wide, making a total of 1600 cells for the simulation window; each of these cells correspond to a catchment area (the area within which most household activities will take place) of a typical household of two square kilometers. The simulation window is divided into two sections-a summer landscape and a winter landscape.Each of these comprises 800 cells.This is admittedly reduced (modern herders may move several times in a single year) in order to preserve the simplicity of the model.The agents themselves migrate between the summer and winter landscapes each season (represented by one timestep, or tick in the model).In summer all land is productive.In winter, however, only half of the landscape (400 cells) has the possibility of being productive, with the other half of the landscape being composed of barren patches.These barren patches are populated in random locations at the beginning of the simulation.Additionally, 2/3 of the remaining winter cells (264 cells) begin as "brown" and regenerate according to the parameter "grass regrowth time", which was set at five timesteps for this simulation (five timesteps being the equivalent of five seasons, so if a patch dies during summer, it will regenerate five seasons later in winter.The decision for five timesteps is not based on any ethnographic fact, but was used for simplicity in this simulation.Future studies may test and alter this parameter.)While five timesteps may seem long, in northern Mongolia, at least, areas of intense utilization are still visible one or more years after a household has abandoned that area. To summarize, green patches are productive, brown patches are currently unproductive and symbolize those areas that can regenerate with time, while barren patches are never productive and symbolize those areas that will always be dead in winter.Both summer and winter patches can become brown with use, while only some winter patches will be barren.Barren and brown patches are not only representative of the absence of grass, but by logical extension, any reduction in productivity.For example, a dzud may not have a long term impact on grass growth, but the impact on productivity is great due to herd loss. 
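As a point of reference for the description above, the following is a minimal sketch of the landscape setup. The grid size, seasonal split, barren fraction, brown fraction, and regrowth time follow the text; the Python class layout is a hypothetical re-expression of the NetLogo model rather than the original code.

```python
# Sketch of the Ger Grouper landscape setup: a 40 x 40 grid split into summer
# and winter halves, with half the winter cells permanently barren and 2/3 of
# the remaining winter cells starting "brown" (regrowing after 5 ticks).
import random

GRID = 40
GRASS_REGROWTH_TIME = 5          # ticks until a grazed patch turns green again

class Patch:
    def __init__(self, season):
        self.season = season     # "summer" or "winter"
        self.barren = False      # never productive
        self.green = True        # currently productive
        self.regrow_timer = 0

def build_landscape(winter_barren_fraction=0.5, rng=None):
    rng = rng or random.Random(0)
    patches = {}
    for x in range(GRID):
        for y in range(GRID):
            season = "summer" if y < GRID // 2 else "winter"
            patches[(x, y)] = Patch(season)
    winter_cells = [p for p in patches.values() if p.season == "winter"]
    rng.shuffle(winter_cells)
    n_barren = int(winter_barren_fraction * len(winter_cells))
    for p in winter_cells[:n_barren]:
        p.barren, p.green = True, False
    productive = winter_cells[n_barren:]
    for p in productive[: (2 * len(productive)) // 3]:   # 2/3 start brown
        p.green, p.regrow_timer = False, GRASS_REGROWTH_TIME
    return patches

landscape = build_landscape()
print(sum(p.barren for p in landscape.values()), "barren winter cells")
```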
When an agent lands on a cell, the agent automatically takes the resources that grow on that patch-in the simulation we call these resources "energy" and energy gained from patches is set by the parameter "ger gain from food".In this sweep energy was set to five.Here we have the logical proxy that a household is dependent on its herd, and herds depend on grass, so the quantity of energy (as measured by converting grass to stock) equals the quantity of sheep a household could have.While there may be more sophisticated ways of modeling energy as it moves through trophic levels, the correlation of herd size and grass was maintained in order to preserve the simplicity of the model.When a patch has all of its grass eaten, the patch turns brown and is unproductive; it will regrow the grass when agents move off of it according to the parameter "grass regrowth time". There is one final parameter related to patch productivity: the parameter "energy loss from dead patches".If at the end of an agent's move but before the end of the timestep an agent lands on a brown patch, that agent is charged energy according to that parameter.In this sweep that parameter was also set at 5. For clarification, while an agent will, in the end, be on a brown patch (because it eats the grass there) the agent is only penalized if it lands on a patch where there was no grass to begin with (if the patch was brown or barren upon landing there).This penalty is meant to simulate the costs that herders who are unable to find suitable locations in patchy environments may have to endure, which may include camping in less than ideal locations. Agents move each summer and each winter (mimicking Mongolian semi-nomadic seasonal shifts) by randomly choosing an unoccupied patch on the opposite side of their current simulation window (in summer they move to winter, and vice versa).If the agent lands on an unproductive patch, it checks its Moore neighborhood radius (each adjacent cell) and moves to a green patch in the radius; if there are no productive cells in the Moore radius the agent stays put until the next season.Agents are charged one energy unit to move, but are penalized five energy units if they stay on an unproductive patch.In the system we are simulating here, Mongolian pastoralists choose to move seasonally as the long term benefits of fresh pasture outweigh the relatively low, short term costs associated with moving. Agents in this simulation are incredibly myopic and have limited memory.However, agents do track the productive patches they have visited in winter and will choose to move to a previously visited patch (as long as that patch is empty, as only one agent can be on a patch at a time).If a productive patch they have previously visited is not available, the agent will simply move to an empty winter patch.Since half of the winter landscape is composed of patches that cannot produce food, remembering (and moving to) a patch that previously was productive gives the agents the ability to avoid accidentally landing on a completely unproductive patch.In this sense the agents are reactive to their environmental conditions, and can only work to improve their quest for energy in two ways: moving, or asking a neighbor of a similar strategy for help. 
Each winter, agents move from the summer cells to the winter cells.This migration is costless as long as a ger lands on a productive patch.If they land on an unproductive patch they are charged one energy unit to move in their Moore radius to a productive patch.Agents get five energy units each time they eat grass, and if they land on an unproductive patch they are charged five energy units at the end of the timestep.A lucky ger, landing regularly on good winter pasture, will be able to sustain and grow its energy stock. In summer, if agents have stored more than 20 energy units they have a 5% chance of reproduction by fissioning.When agents reproduce, the daughter household is spawned one cell distant from the parent cell and the stored energy of the parent household is divided evenly between parent and daughter households. Agents are initially created with four distinct sharing strategies.These strategies are related to the storage of resources and are tracked based on lineage.When agents are created they track their strategy as their lineage, and they never change strategies (agents do not learn).They pass these strategies on to their daughter households. Strategy A-agents will always merge with another household when asked Strategy B-agents have a 50% likelihood of accepting an offer of merger Strategy C-agents have a 25% chance of accepting an offer of merger Strategy D-agents will never merge When agents have less than 10 energy units they know they are approaching death.Agents that have less than 10 energy units will search within a radius of five cells for others in their same lineage-that is, the same cooperation strategy.The agent that is close to starvation will ask one of their lineage for help.Those that always share (Strategy A) will always say yes; Strategy B will only say yes with a 50% probability, and Strategy C only will say yes with a 25% probability.Those in Strategy D never ask for help, because help will never be given. Upon the acceptance of an offer of a merger, the merging agent donates all of its resources to the agent that accepted the offer of merger, and then households merge together.The combined households will then have more total energy, and perhaps a greater potential for fissioning the following summer.This method of merging has been observed ethnographically in the region.For example, during ethnographic interviews conducted in northern Mongolia in 2012, a recently merged household was encountered.Only one week before interviews a child had set their family's ger on fire.The family took their belongings and joined their herds with another household.The households would remain merged until they were able to acquire or build another ger, and accumulate enough resources to move out on their own once again. The simulation stops when either: (a) the simulation reaches 500 ticks (timesteps or seasons); or (b) there are no more agents on the landscape.Those households that survive to the end of the simulation, via luck and compassionate neighbors, represent the propagation of a kin descent group.As illustrated in the figures that follow, the most dynamic results occur in the first few hundred ticks.However, the simulation was run to 500 ticks in order to show the stability of the strategies over the long term. 
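The household-level rules described above, in particular the strategy-dependent merge request, can be sketched as follows. The energy threshold, search radius, and acceptance probabilities are taken from the text; the class and helper names are a hypothetical re-expression of the NetLogo procedures, not the original implementation.

```python
# Sketch of the strategy-dependent merge request: a ger with fewer than 10
# energy units asks one same-strategy neighbour within 5 cells to merge; the
# host accepts with a probability set by its sharing strategy (A/B/C/D).
import random

ACCEPT_PROB = {"A": 1.0, "B": 0.5, "C": 0.25, "D": 0.0}

class Ger:
    def __init__(self, strategy, energy, pos):
        self.strategy, self.energy, self.pos = strategy, energy, pos
        self.alive = True

def within_radius(a, b, radius=5):
    return abs(a.pos[0] - b.pos[0]) <= radius and abs(a.pos[1] - b.pos[1]) <= radius

def try_merge(ger, all_gers, rng=random.Random(0)):
    if ger.energy >= 10 or ger.strategy == "D":
        return False                       # strategy D never asks; help is never given
    kin = [g for g in all_gers
           if g is not ger and g.alive
           and g.strategy == ger.strategy and within_radius(ger, g)]
    if not kin:
        return False
    host = rng.choice(kin)
    if rng.random() < ACCEPT_PROB[host.strategy]:
        host.energy += ger.energy          # the merging ger donates all its resources
        ger.alive = False                  # the two households now move as one
        return True
    return False

# tiny usage example with three households of the 50%-sharing lineage
gers = [Ger("B", 6, (3, 3)), Ger("B", 25, (5, 4)), Ger("B", 30, (20, 20))]
print(try_merge(gers[0], gers))
```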
Results and Discussion For this study we examined how the variable "patch variability" affects the population of agents following the four different strategies.Patch variability reflects the likelihood at any timestep that a portion of the productive winter landscape will be unproductive.The different portions of unproductive landscape modeled can be related to both winter severity and differences in landscape in two or more compared regions.Seven values for patch variability were examined, displayed in Table 1. Table 1. Description of key parameter "patch variability" and what each of the values corresponds to.When patches are set to 0% all patches during winter can be productive, while each increment decreases the productivity by that percentage. Variability Description 0 During the winter all patches can be productive 5 During the winter, 5% of all patches can be unproductive 10 During the winter, 10% of all patches can be unproductive 15 During the winter, 15% of all patches can be unproductive 20 During the winter, 20% of all patches can be unproductive 25 During the winter, 25% of all patches can be unproductive 30 During the winter, 30% of all patches can be unproductive In addition to testing each of these values for patch variability we examined how each of the strategies fared when just one strategy was present per patch variability (for example, only strategy A was practiced), versus when all strategies were present simultaneously.In this way we can examine the direct effects of patch variability on one strategy, as well as the effects of different competing strategies and patch variability. While multiple parameters were written in to the simulation (such as how much energy can be gained from grass, how much lost when grass is dead, what percentage to reproduce) the main question in this research is: "How well do the different sharing practices cope with impact of variable weather (such as localized temperature and precipitation)?"The parameter patch variability takes the simulation window and every year makes patches unproductive according to the values in Table 1.This creates unpredictable patchiness of the environment.The list of other parameters in this simulation and their values is reported in Table 2. Table 2. List of key parameters and values that were swept across in this simulation.To note the parameter "winter patch variability" was the key parameter varied, with most other variables set to 5 for consistency.In total, 1750 runs of the simulation were completed for this study.For each of the seven values for the key parameter of patch variability, 10 runs were done with each of the five random number seeds so that outliers could be accounted for.Two separate experiments were done: looking at how each of these strategies fares when it is the only strategy represented on the landscape, and examining how these strategies fare when each strategy is represented on the landscape at the same time. 
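For orientation, the sweep described above can be sketched as a simple experiment loop. The run_simulation function is a hypothetical placeholder for a single Ger Grouper run (for example, driven through NetLogo's BehaviorSpace or a scripting bridge); only the loop structure is illustrated here. Seven variability levels, five seeds, and ten replicates, run for each of the four single-strategy setups plus the mixed setup, give the 1750 total runs mentioned in the text.

```python
# Sketch of the parameter sweep: 7 winter patch variability levels x 5 random
# seeds x 10 replicates, each run once per single strategy and once with all
# four strategies seeded together. run_simulation() is a hypothetical stand-in.
import itertools

VARIABILITY = [0, 5, 10, 15, 20, 25, 30]      # % of winter patches made unproductive
SEEDS = [1, 2, 3, 4, 5]
REPLICATES = range(10)

def run_simulation(variability, seed, replicate, strategies):
    """Hypothetical placeholder: would run one 500-tick Ger Grouper simulation
    and return the household count per strategy over time."""
    return {}

results = []
for var, seed, rep in itertools.product(VARIABILITY, SEEDS, REPLICATES):
    for strategy in ["A", "B", "C", "D"]:                                  # single-strategy runs
        results.append(run_simulation(var, seed, rep, [strategy]))
    results.append(run_simulation(var, seed, rep, ["A", "B", "C", "D"]))   # mixed run
```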
Single Strategies
As displayed in Figure 1, when only one strategy is present, regardless of which strategy is represented, the population reaches carrying capacity and the mean population curve follows a regular logistic growth curve [31]. The most striking difference in this graphic is the difference between Column A (100% sharing) and the rest of the columns (50%, 25% and 0% sharing). While the mean population curve for column A is similar to the mean population curves for each of columns B, C, and D, the variance around the mean is much more pronounced. This is true even in row 1, which represents 0% patch variability. The means for each value of patch variability are reported in Supplementary Figures S1-S7 so that the means can be compared. With the means graphed on the same axes, the similarities among strategies are even more apparent. While there is some difference, those differences are small. The differences become larger as patch variability becomes higher; by the time patch variability is 30%, the detriment of the all-sharing strategy becomes apparent. If agents always share, overall populations are lower, while restricted sharing strategies have higher populations. But even the difference between all share and the other strategies is minimal. As we will see below, this is in contrast to when each strategy is represented at the same time on the landscape. Hegmon [6] found in her simulation of Hopi food sharing strategies that 100% cooperation was rarely the optimal strategy, but rather that restricted sharing seemed to benefit the overall population the most. The results presented here compare positively with Hegmon's findings. While the mean of each of the sharing strategies reported here is similar, the variance in the 100% sharing strategy suggests that sharing with no restrictions could be detrimental, even in favorable conditions. While the mean of the all-share strategy is similar to all the other strategies (Figures S1-S7), the variance (Figure 1) shows that an all-share strategy could have highly unpredictable outcomes. The tighter variance around the mean in the other strategies suggests that those strategies would have more predictable outcomes. Hegmon also suggests that hoarding (here represented as 0% sharing) is only a good option in the years of the worst productivity. When looking at Figures S1-S7 there appears to be no functional difference between any of the strategies, so this finding is not necessarily echoed in our results at this stage.
Multiple Strategies
Here we examine how populations respond to environmental stressors when the different strategies coexist in the same landscape. At the beginning of the simulation five agents of each strategy are seeded on the landscape. Experiments followed the same trajectory as above, with seven values for patch variability and five random number seeds.
The first thing to note is the scale: when only one strategy is represented, the total population of that strategy is higher than the total for the same strategy when multiple strategies are present. In Figure 1 the scale is set to 150 agents, while in Figure 2 the scale is set to 60 agents. Because of this, in Figure 2 the variability might seem higher than it is when compared to Figure 1, but the variance around the mean is only ever approximately 40 agents in both Figures 1 and 2 (Figure 1, strategy A excluded). Comparing the means of each strategy against one another on one graphic provides more helpful information. In Figures 3-9 each of the strategy means is graphed on top of the others without the variance surrounding the mean shown in Figures 1 and 2. This allows us to directly compare the mean strategies without surrounding noise. Figure 3 shows how each strategy fared against the others when the environment did not have any variability. Of note, the 100% sharing strategy is never the best performing strategy. In these runs of the simulation, hoarding (0% sharing) is the highest performing strategy early in the simulation, while over time those gers that subscribe to a hoarding strategy decrease in number. The strategy of sharing 50% of the time, however, is very stable, and eventually becomes the most populous strategy. In a situation of stable population we may expect to see a convergence upon the mean as agents coalesce upon stable landscapes. A population under stress, however, will see a wide range of variation around the mean as agents attempt to maximize their resource acquisition while dealing with a volatile landscape (as seen above when only one strategy is represented). While the landscape in these runs of the simulation does not have year-to-year variability, the use of the land will create barren patches for five timesteps. Thus, early on, gers that do not share do well on the landscape because there is little environmental impetus for sharing. With a predictable environment from year to year, independence can be a viable strategy. However, as the simulation progresses and gers create barren patches on the landscape from over-use, sharing can help gers avoid the variable productivity in the landscape they themselves have created. Figure 4 follows a similar trajectory to Figure 3, with 100% sharing never being the best performing strategy of the four, no sharing performing the best early on, and restricted sharing performing the best toward the end of the simulation. Figure 5, however, begins to diverge from Figures 3 and 4.
In this figure the winter landscape had 10% variability.The sharing strategies are each fairly stable, reaching their own respective carrying capacities of 20 to 25 households on the landscape.In these runs of the simulation hoarding (0% sharing) is early on the highest performing strategy.However, this strategy has high variability, likely due to the unpredictability of the landscape, and the similar effect of overuse.However, as only 10% of the landscape is variable (due to the environment), independent gers can make a living on the landscape with the simple rules created for this simulation.Once the environmental unpredictability of the landscape reaches 15%, hoarding is no longer the strategy with the highest population, and will only become optimal again when the landscape's carrying capacity becomes very low (unpredictability of 25%).In Figure 6 we can see that the means of the restricted sharing strategies (50% and 25% sharing) perform the best.Early in the simulation the 25% sharing strategy has the highest mean, while later in the simulation the 50% sharing strategy has the highest mean.This holds true for Figure 7 as well.When the environmental landscape exhibits 20% unpredictability in winter patches, restricted sharing strategies perform well.Note, however, that in the final years of these simulations, the mean of the 100% sharing strategy performs well, while the other strategies remain relatively stable. In Figure 8 hoarding once again is the highest performing strategy.While above we suggest that hoarding is a good strategy when the landscape is productive enough that sharing is not necessary, Figure 8 echoes Hegmon's [6] finding that hoarding is a viable strategy when the landscape is so poor that sharing will be detrimental for the overall population.Please note, however, that the difference in this graph between the restricted sharing strategies and the hoarding strategy is one household.In fact, many of the differences are rather small.Over the long term, however, even small differences in survivability (small adaptive advantages) may impact decision making. In Figure 9, when the landscape exhibits 30% unpredictability in winter patches, the averages of all of the four strategies are within one household.However, the 25% sharing strategy seems to have the highest mean on average.These results, when compared with Figure 2(c7) show that this strategy also has the least variance (and thus might have the most predictable outcome). Hegmon [6] found in her simulations that the all-share strategy was never the optimal strategy, and that hoarding is an optimal strategy for a population when the environment is highly unpredictable.These findings are comparable to our study results, although we show that there is little necessity for sharing in a highly predictable landscape.Only when the landscape becomes changed due to use, or the environmental predictability becomes great, do sharing strategies become necessary.Comparing Figures 1 and 2 we can see that some similar patterns are apparent-an "all share" strategy never outperforms the other strategies, but there appears to be little functional difference among the other strategies.Each strategy reaches the logistic population curve (the carrying capacity) in Figure 1, but in Figure 2 there is greater variability.When comparing the means in Figures 3-9 we see that restricted sharing seems to be the most beneficial strategy when environmental conditions are unpredictable. 
For a final means of comparison, we examined the statistical difference among the strategies with a Kolomgorov-Smirnov analysis.Kolomgorov-Smirnov analyses allow for direct comparability of each of the simulated means to see if there are statistical differences between each of the strategies.We simplified these data into five time slices: 100 ticks (50 years), 200 ticks (100 years), 300 ticks (150 years), 400 ticks (200 years), and 500 ticks (250 years).Further, we compared the pair-wise difference between the following means: Strategy A to Strategy B, Strategy B to Strategy C, Strategy C to Strategy D, and Strategy A to Strategy D. Frequency differences as well as p-values to the 0.05 level are reported in Tables 3 and 4. Table 3 corresponds to Figure 1 (single strategies modeled) while Table 4 corresponds to Figure 2 (all strategies present). As can be seen in Table 3, when only one lineage is represented, 31 of the 140 K-S statistic values show clear statistical significance in their difference.In Table 4 we can see that 21 of the 140 values show clear statistical significance in their difference.Thus we can say that in 22% of the cases when only one lineage is represented there are real differences in the number of surviving households on the landscape, while when all lineages are represented 15% of the cases show real differences in the number of surviving households on the landscape.It is worth noting, the strongest difference is between strategy A (all share) and strategy D (no share) in both solo lineages and all lineages, with 17 and 11 cases showing statistical significance respectively.Little difference is seen in the restricted sharing strategies (50% and 25%) potentially showing that both of these are viable in most years and may be functionally the same. In Table 3, the highest and most significant variation seems to be related to reaching the environmental carrying capacity, which generally is reached between 100 and 200 ticks.Most other times variation is not significant except between extreme strategies in less variable landscapes.In Table 4, however, variation is related to the end of the simulation, potentially showing that as sharing strategies stabilize the differences among them become more pronounced. From Figures 1 through 8 and Tables 3 and 4 we may be able to interpret that during years of middling unpredictably, those households that do not freely share their resources with everyone (but do share with a select few) are likely to have their caloric needs met, are likely to reproduce, and are likely to survive into the next year.The significance in variation among the strategies suggests that there are real differences in all sharing, restricted sharing, and hoarding and, potentially, that individuals using those strategies would be able to see how well their strategy compared to other strategies.These findings are also echoed in Crabtree [1].Only during exceptional years would households want to horde their resources, potentially insuring their own survival at the detriment of others.Table 3. 
Results from Kolomgorov-Smirnov analysis on single lineage values.Data is simplified into five time slices: 100, 200, 300, 400 and 500 ticks.K-S values that show significance above a p-value of 0.05 are highlighted blue and show "Sig" in the significance column.This means that there is a large difference in the mean values for the lines during that tick for those variability values.Of note are the 17 K-S values that show as significant between Strategy A (all share) and Strategy D (no share). Discussion Winterhalder and Leslie [32] have shown that long-term stochastic processes may affect how individuals react to environmental conditions and how they approach risk.In their model, demographic response to an unpredictable environment will, by nature, be nonlinear.For example, people cannot predict exactly how many children to have so that four children will grow into adulthood.The results of our above analysis echo those of Winterhalder and Leslie and show that individuals may indeed seek risk when environments are highly unstable in order to have the chance of surviving, and may be risk-averse when environments are stable.The high levels of variance observed in the model presented in this paper are at least partially reflective of the unpredictable, highly unstable environments in which this simulation occurs.While Hegmon [6] found that restricted sharing will be the most beneficial strategy for overall populations (restricted sharing should decrease variance), Winterhalder and Leslie's findings may highlight why highly variance will be beneficial in unpredictable environments.People may need to try multiple strategies to survive. Powers and Lehman [2] found that sharing increases the carrying capacity of a system.Such a result is potentially visible in our results as well.When environmental pressures become great, and households group together, the environmental pressures can become mitigated by the social sharing strategy.However, despite sharing strategies lessening environmental pressures, households are never outside of those environmental pressures, and the use of the landscape creates environmental pressures as well due to patch degradation. Pastoralists have long been blamed for environmental degradation from overgrazing [33].The "tragedy of the commons" theory states that unmonitored common-pool resources, as is the case in Mongolia with individual ownership of herds, but not land, leads to irresponsible usage of resources.However, critics of this theory point to various formal and informal social adaptations that oversee and regulate resource [34].The same cooperation and sharing networks modeled here may parallel the social networks ensuring sustainable resource utilization through traditional ecological knowledge. 
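Returning to the pair-wise Kolmogorov-Smirnov comparisons reported in Tables 3 and 4, a minimal sketch of one such comparison is given below. The household counts are hypothetical placeholders for the simulation output at a single time slice; scipy's two-sample K-S test is used for illustration.

```python
# Sketch of one pair-wise Kolmogorov-Smirnov comparison between strategies:
# the distribution of surviving-household counts for strategy A is compared
# against strategy D across replicate runs at a single time slice.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
counts = {
    "A": rng.poisson(18, size=50),   # hypothetical counts, all-share lineage
    "D": rng.poisson(22, size=50),   # hypothetical counts, no-share lineage
}

stat, p_value = ks_2samp(counts["A"], counts["D"])
print(f"K-S statistic = {stat:.3f}, p = {p_value:.3f}")
print("significant at 0.05" if p_value < 0.05 else "not significant at 0.05")
```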
The problem of common-pool resources is evident in the model. When agents land on patches they extract the resources from those patches, and must wait multiple timesteps until those patches regenerate. It is possible that all winter patches in one area could become used during one timestep, causing future households to have no opportunities to find productive patches. If agents land on dead patches they are charged energy. Once agents have fewer than 10 units of energy stored, those agents with a sharing strategy must rely on other agents in their network for survival. In this way we can see how agents react to a simulated tragedy of the commons. Once resources are over-exploited in an area, households must call upon their networks for help. As we see in this simulation, agents are doubly burdened by both simulated dzuds and simulated resource over-use. Those agents that are able to rely on their greater social network fare better overall than those agents with no social network when both climatic and overuse pressures affect the environment.

One final issue addressed by this model is the poor resolution of the archaeological record. While research on households in Mongolia is ongoing (e.g., [12]), most studies in the region have focused on monumental archaeology. This is coupled with the poor resolution of household archaeology (centimeters of deposition equating to centuries of occupation). Consequently, our understanding of the past can be blurred. Simulations, therefore, help us to address these gaps in our knowledge.

Notably missing from our study is a goodness-of-fit exercise between the model and real settlement patterns [1,35]. This is due in part to there not yet being many complete archaeological datasets in the region against which to run goodness-of-fit tests. Consequently, we must make do and use models as a way to inform our understanding of the limited archaeological information available at this time.

This model, while not meant as a reproduction of reality, presents a plausible scenario based on developed theory and aims to address key questions of how semi-nomadic Mongolians respond to local weather events, such as drought and heavy winters. While this model is highly simplified, it presents a plausible suite of directions that people in this highly unpredictable environment could face. Therefore, the outcome of our study can be used to draw some conclusions about a much more complicated system.

Conclusions

The mobility of Mongolia's pastoralists presents a unique case, rather different from the settled Ancestral Pueblos investigated by Hegmon [6,7]. Household units, which move frequently anyway, can fission and fuse without large disruptions to the social, economic, or political order. Rather than reaching a breaking point, temporary solutions can mitigate risk and catastrophe, followed by a return to the normal order.
So which of the above cooperation strategies works best for Mongolia? This is a tricky question with no single straightforward answer. All of Mongolia is hit by dzuds, but they do not impact different regions of the country equally; one area will be more susceptible to them than others for various natural and socio-cultural reasons. For instance, the weather in southern Mongolia's Gobi Desert is quite different from that of northern Mongolia's Taiga-Mountain-Steppe ecotones. Therefore, which strategy is most beneficial may vary geographically as well as temporally. Additionally, the availability of other risk-mitigating adaptations differs by region. There may be many more types of wild resources available in the northern ecotones than in the more homogeneous steppe or desert zones in central and southern Mongolia. In regions where it is more difficult to fall back on wild resources, much more importance may be placed on social or kin networks to mitigate risk. This might be seen archaeologically in Mongolia by looking at facets of the ritual landscape as a reflection of the strength of social and kin networks [12].

Ger Grouper is a very simplified model. However, this "wrong" model (sensu [29]) is useful in that it helps us to understand how individuals might react to catastrophic events. We began with a highly simplified model to examine how variables interact with one another, so that in the future we can truly examine the effects of variables in a realistic setting. Future development of this model will include bringing real-world variables into the model. The rates of environmental catastrophes (e.g., dzuds and droughts) can be reconstructed using historical weather data, which can then be added to create a more realistic "patchy" element in the model. In addition, realistic GIS landscapes can be created based on real locations within Mongolia and the surrounding regions. As more detailed archaeological and paleoenvironmental data become available, the parameters of the model will improve. The results from multiple regions can then be compared, illuminating any differences in socially adaptive risk-management responses due to environmental variation. The Ger Grouper model was designed to work at a landscape scale compatible with the annual seasonal rounds of mobile pastoralists in Mongolia. Agent-based modeling, when implemented at this scale, will allow for explicit connections between computer-aided models and archaeological project design.

Figure 1. Figure showing how each individual strategy responds to environmental pressures when no other lineage is present. Each tile is as follows: columns marked A correspond to the 100% sharing strategy, columns marked B to the 50% sharing strategy, columns marked C to the 25% sharing strategy, and columns marked D to the 0% sharing strategy. Row 1 is 0% winter patch variability, Row 2 is 5%, Row 3 is 10%, Row 4 is 15%, Row 5 is 20%, Row 6 is 25%, and Row 7 is 30%. Thus, tile c3 is the 25% sharing strategy under 10% patch variability. The y-axis goes from 0 to 150 households; the x-axis goes from 0 to 500 ticks. The red dotted line corresponds to the standard deviation from the mean, the gray lines show each strategy, and the black central line corresponds to the mean of each strategy.

Figure 2.
Figure showing how each individual strategy responds to environmental pressures when all other lineages are present. Each tile is as follows: columns marked A correspond to the 100% sharing strategy, columns marked B to the 50% sharing strategy, columns marked C to the 25% sharing strategy, and columns marked D to the 0% sharing strategy. Row 1 is 0% winter patch variability, Row 2 is 5%, Row 3 is 10%, Row 4 is 15%, Row 5 is 20%, Row 6 is 25%, and Row 7 is 30%. Thus, tile c3 is the 25% sharing strategy under 10% patch variability. The y-axis goes from 0 to 150 households; the x-axis goes from 0 to 500 ticks. The red dotted line corresponds to the standard deviation from the mean, the gray lines show each strategy, and the black central line corresponds to the mean of each strategy.

Figure 3. Means of each of the strategies for 0% patch variability. Means correspond to Row 1 of Figure 2. This figure reflects those runs when all strategies were present in the simulation.

Figure 4. Means of each of the strategies for 5% patch variability. Means correspond to Row 2 of Figure 2. This figure reflects those runs when all strategies were present in the simulation.

Figure 5. Means of each of the strategies for 10% patch variability. Means correspond to Row 3 of Figure 2. This figure reflects those runs when all strategies were present in the simulation.

Figure 6. Means of each of the strategies for 15% patch variability. Means correspond to Row 4 of Figure 2. This figure reflects those runs when all strategies were present in the simulation.

Figure 7. Means of each of the strategies for 20% patch variability. Means correspond to Row 5 of Figure 2. This figure reflects those runs when all strategies were present in the simulation.

Figure 8. Means of each of the strategies for 25% patch variability. Means correspond to Row 6 of Figure 2. This figure reflects those runs when all strategies were present in the simulation.

Figure 9. Means of each of the strategies for 30% patch variability. Means correspond to Row 7 of Figure 2. This figure reflects those runs when all strategies were present in the simulation.

Table 4. Results from Kolmogorov-Smirnov analysis on values from runs with multiple lineages present. Data are simplified into five time slices: 100, 200, 300, 400 and 500 ticks. K-S values that are significant at the 0.05 level are highlighted blue and show "Sig" in the significance column. This means that there is a large difference in the mean values for the lines during that tick for those variability values. Of note are the 11 K-S values that show as significant between Strategy A (all share) and Strategy D (no share).
An existence theorem for Brakke flow with fixed boundary conditions

Consider an arbitrary closed, countably n-rectifiable set in a strictly convex $(n+1)$-dimensional domain, and suppose that the set has finite n-dimensional Hausdorff measure and the complement is not connected. Starting from this given set, we show that there exists a non-trivial Brakke flow with fixed boundary data for all times. As $t \uparrow \infty$, the flow sequentially converges to non-trivial solutions of Plateau's problem in the setting of stationary varifolds.

named author in [20] by reworking [2] thoroughly. The major challenge of the present work is to devise a modification to the approximation scheme in [20] which preserves the boundary data. Though somewhat technical, in order to clarify the setting of the problem at this point, we state the assumptions on the initial surface 0 and the domain U hosting its evolution. Their validity will be assumed throughout the paper.

Assumption 1.1 Integers n ≥ 1 and N ≥ 2 are fixed, and clos A denotes the topological closure of A in R n+1 . Since N ≥ 2, we implicitly assume that U \ 0 is not connected. When n = 1, 0 could be for instance a union of Lipschitz curves joined at junctions, with "labels" from 1 to N being assigned to each connected component of U \ 0 . If one defines F i := (clos E 0,i ) \ (U ∪ ∂ 0 ) for i = 1, . . . , N , one can check that each F i is relatively open in ∂U , F 1 , . . . , F N are mutually disjoint, and The assumption (A4) is equivalent to the requirement that each x ∈ ∂ 0 is in ∂ F i 1 ∩ ∂ F i 2 for some indices i 1 ≠ i 2 . The main result of the present paper can then be roughly stated as follows. For all t > 0, (t) remains within the convex hull of 0 ∪ ∂ 0 . More precisely, { (t)} t≥0 is a MCF in the sense that (t) coincides with the slice, at time t, of the space-time support of a Brakke flow {V t } t≥0 starting from 0 . The method adopted to produce the evolving generalized surfaces (t) actually gives us more. Indeed, we show the existence of N families {E i (t)} t≥0 (i = 1, . . . , N ) of evolving open sets such that E i (0) = E 0,i for every i, and (t) = U \ ∪ N i=1 E i (t) for all t ≥ 0. At each time t ≥ 0, the sets E 1 (t), . . . , E N (t) are mutually disjoint and form a partition of U . Moreover, for each fixed i the Lebesgue measure of E i (t) is a continuous function of time, so that the evolving (t) do not exhibit arbitrary instantaneous loss of mass. See Theorems 2.2 and 2.3 for the full statement. It is reasonable to expect that the flow (t) converges, as t → ∞, to a minimal surface in U with boundary ∂ 0 . We are not able to prove such a result in full generality; nonetheless, we can show the following

Theorem B There exists a sequence of times {t k } ∞ k=1 with lim k→∞ t k = ∞ such that the corresponding varifolds V k := V t k converge to a stationary integral varifold V ∞ in U such that (clos (spt V ∞ )) \ U = ∂ 0 . See Corollary 2.4 for a precise statement.
The limit V ∞ is a solution to Plateau's problem with boundary ∂ 0 , in the sense that it has the prescribed boundary in the topological sense specified above and it is minimal in the sense of varifolds. We warn the reader that V ∞ may not be area-minimizing. Furthermore, the flow may converge to different limit varifolds along different diverging sequences of times in all cases when uniqueness of a minimal surface with the prescribed boundary is not guaranteed. The possibility to use Brakke flow in order to select solutions to Plateau's problem in classes of varifolds seems an interesting byproduct of our theory. See Sect. 7 for further discussion on these points. Next, we discuss closely related results. While there are several works on the global-intime existence of MCF, there are relatively few results on the existence of MCF with fixed boundary conditions. When 0 is a smooth graph over a bounded domain in R n , globalin-time existence follows from the classical work of Lieberman [25]. Furthermore, under the assumption that is mean convex, convergence of the flow to the unique solution to the minimal surfaces equation in with the prescribed boundary was established by Huisken in [16]; see also the subsequent generalizations to the Riemannian setting in [31,34]. The case of network flows with fixed endpoints and a single triple junction was extensively studied in [28,30]. For other configurations and related works on the network flows, see the survey paper [29] and references therein. In the case when N = 2 (which does not allow triple junctions in general), a powerful approach is the level set method [4,10]. Existence and uniqueness in this setting were established in [35], and the asymptotic limit as t → ∞ was studied in [18]. Recently, White [39] proved the existence of a Brakke flow with prescribed smooth boundary in the sense of integral flat chains mod (2). The proof uses the elliptic regularization scheme discovered by Ilmanen [17], which allows one to obtain a Brakke flow with additional good regularity and compactness properties; see also [32] for an application of elliptic regularization within the framework of flat chains with coefficients in suitable finite groups to the long-time existence and short-time regularity of unconstrained MCF starting from a general surface cluster. Observe that the homological constraint used by White prevents the flow to develop interior junction-type singularities of odd order (namely, junctions which are locally diffeomorphic to the union of an odd number of half-hyperplanes), because these singularities are necessarily boundary points mod (2). As a consequence, the flows obtained in [39] may differ greatly from those produced in the present paper. This is not surprising, as solutions to Brakke flow may be highly non-unique. A complete characterization of the topological changes that the evolving surfaces can undergo with either of the two approaches is, in fact, an interesting open question. It is worth noticing that analogous generic nonuniqueness holds true also for Plateau's problem: in that context, different definitions of the key words surfaces, area, spanning in its formulation lead to solutions with dramatically different regularity properties, thus making each model a better or worse predictor of the geometric complexity of physical soap films; see e.g. the survey papers [6,15] and the references therein, as well as the more recent works [7][8][9][22][23][24]27]. 
It is then interesting and natural to investigate different formulations for Brakke flow as well. Basic notation The ambient space we will be working in is Euclidean space R n+1 . We write R + for [0, ∞). For A ⊂ R n+1 , clos A (or A) is the topological closure of A in R n+1 (and not in U ), int A is the set of interior points of A and conv A is the convex hull of A. The standard Euclidean inner product between vectors in R n+1 is denoted x · y, and |x| := √ x · x. If L, S ∈ L (R n+1 ; R n+1 ) are linear operators in R n+1 , their (Hilbert-Schmidt) inner product is L · S := trace(L T • S), where L T is the transpose of L and • denotes composition. The corresponding (Euclidean) norm in L (R n+1 ; R n+1 ) is then |L| := √ L · L, whereas the operator norm in L (R n+1 ; R n+1 ) is L := sup |L(x)| : x ∈ R n+1 with |x| ≤ 1 . If u, v ∈ R n+1 then u ⊗ v ∈ L (R n+1 ; R n+1 ) is defined by (u ⊗ v)(x) := (x · v) u, so that u ⊗ v = |u| |v|. The symbol U r (x) (resp. B r (x)) denotes the open (resp. closed) ball in R n+1 centered at x and having radius r > 0. The Lebesgue measure of a set A ⊂ R n+1 is denoted L n+1 (A) or |A|. If 1 ≤ k ≤ n + 1 is an integer, U k r (x) denotes the open ball with center x and radius r in R k . We will set ω k := L k (U k 1 (0)). The symbol H k denotes the k-dimensional Hausdorff measure in R n+1 , so that H n+1 and L n+1 coincide as measures. A Radon measure μ in U ⊂ R n+1 is always also regarded as a linear functional on the space C c (U ) of continuous and compactly supported functions on U , with the pairing denoted μ(φ) for φ ∈ C c (U ). The restriction of μ to a Borel set A is denoted μ A , so that (μ A )(E) := μ(A ∩ E) for any E ⊂ U . The support of μ is denoted spt μ, and it is the relatively closed subset of U defined by spt μ := {x ∈ U : μ(B r (x)) > 0 for every r > 0} . The upper and lower k-dimensional densities of a Radon measure μ at x ∈ U are θ * k (μ, x) := lim sup respectively. If θ * k (μ, x) = θ k * (μ, x) then the common value is denoted θ k (μ, x), and is called the k-dimensional density of μ at x. For 1 ≤ p ≤ ∞, the space of p-integrable (resp. locally p-integrable) functions with respect to μ is denoted L p (μ) (resp. L p loc (μ)). For a set E ⊂ U , χ E is the characteristic function of E. If E is a set of finite perimeter in U , then ∇χ E is the associated Gauss-Green measure in U , and its total variation ∇χ E in U is the perimeter measure; by De Giorgi's structure theorem, ∇χ E = H n ∂ * E , where ∂ * E is the reduced boundary of E in U . Varifolds The symbol G(n + 1, k) will denote the Grassmannian of (unoriented) k-dimensional linear planes in R n+1 . Given S ∈ G(n + 1, k), we shall often identify S with the orthogonal projection operator onto it. The symbol V k (U ) will denote the space of k-dimensional varifolds in U , namely the space of Radon measures on G k (U ) := U × G(n + 1, k) (see [1,33] for a comprehensive treatment of varifolds). To any given V ∈ V k (U ) one associates a Radon measure V on U , called the weight of V , and defined by projecting V onto the first factor in G k (U ), explicitly: for every φ ∈ C c (U ) . A set ⊂ R n+1 is countably k-rectifiable if it can be covered by countably many Lipschitz images of R k into R n+1 up to a H k -negligible set. We say that is (locally) H k -rectifiable if it is H k -measurable, countably k-rectifiable, and H k ( ) is (locally) finite. 
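Two of the displayed formulas in the paragraphs above appear to have been lost in extraction: the definition of the weight of a varifold and the definitions of the upper and lower k-dimensional densities. Under the assumption that they take their standard forms, which is what the surrounding text seems to indicate, they would read:

$$\|V\|(\phi) := \int_{G_k(U)} \phi(x)\, dV(x,S) \quad \text{for every } \phi \in C_c(U),$$

$$\theta^{*k}(\mu,x) := \limsup_{r \downarrow 0} \frac{\mu(B_r(x))}{\omega_k r^k}, \qquad \theta^{k}_{*}(\mu,x) := \liminf_{r \downarrow 0} \frac{\mu(B_r(x))}{\omega_k r^k}.$$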
If ⊂ U is locally H k -rectifiable, and θ ∈ L 1 loc (H k ) is a positive function on , then there is a k-varifold canonically associated to the pair ( , θ ), namely the varifold var( , θ ) defined by where T x denotes the approximate tangent plane to at x, which exists H k -a.e. on . Any varifold V ∈ V k (U ) admitting a representation as in (2.1) is said to be rectifiable, and the space of rectifiable k-varifolds in U is denoted by RV k (U ). If V = var( , θ ) is rectifiable and θ(x) is an integer at H k -a.e. x ∈ , then we say that V is an integral k-dimensional varifold in U : the corresponding space is denoted IV k (U ). First variation of a varifold If V ∈ V k (U ) and f : U → U is C 1 and proper, then we let f V ∈ V k (U ) denote the push-forward of V through f . Recall that the weight of f V is given by where is the Jacobian of f along S ∈ G(n + 1, k). Given a varifold V ∈ V k (U ) and a vector field g ∈ C 1 c (U ; R n+1 ), the first variation of V in the direction of g is the quantity where t (·) = (t, ·) is any one-parameter family of diffeomorphisms of U defined for sufficiently small |t| such that 0 = id U and ∂ t (0, ·) = g(·). TheŨ is chosen so that closŨ ⊂ U is compact and spt g ⊂Ũ , and the definition of (2.3) does not depend on the choice ofŨ . It is well known that δV is a linear and continuous functional on C 1 c (U ; R n+1 ), and in fact that where, after identifying S ∈ G(n + 1, k) with the orthogonal projection operator R n+1 → S, If δV can be extended to a linear and continuous functional on C c (U ; R n+1 ), we say that V has bounded first variation in U . In this case, δV is naturally associated with a unique R n+1 -valued measure on U by means of the Riesz representation theorem. If such a measure is absolutely continuous with respect to the weight V , then there exists a V -measurable and locally V -integrable vector field h(·, V ) such that by the Lebesgue-Radon-Nikodým differentiation theorem. The vector field h(·, V ) is called the generalized mean curvature vector of V . In particular, if δV (g) = 0 for all g ∈ C 1 c (U ; R n+1 ), V is called stationary, and this is equivalent to h(·, V ) = 0 V -almost everywhere. For any V ∈ IV k (U ) with bounded first variation, Brakke's perpendicularity theorem [2,Chapter 5] says that Here, S ⊥ is the projection onto the orthogonal complement of S in R n+1 . This means that the generalized mean curvature vector is perpendicular to the approximate tangent plane almost everywhere. Other than the first variation δV discussed above, we shall also use a weighted first variation, defined as follows. For where t denotes the one-parameter family of diffeomorphisms of U induced by g as above. Proceeding as in the derivation of (2.4), one then obtains the expression If δV has generalized mean curvature h(·, V ), then we may use (2.5) in (2.9) to obtain (2.10) The definition of Brakke flow requires considering weighted first variations in the direction of the mean curvature. Suppose V ∈ IV k (U ), δV is locally bounded and absolutely continuous with respect to V and h(·, V ) is locally square-integrable with respect to V . In this case, it is natural from the expression (2.10) to define for φ ∈ C 1 Observe that here we have used (2.6) in order to replace the term h( Brakke flow To motivate a weak formulation of the MCF, note that a smooth family of k-dimensional surfaces { (t)} t≥0 in U is a MCF if and only if the following inequality holds true for all −φ |h(·, (t))| 2 + ∇φ · h(·, (t)) + ∂φ ∂t dH k . 
(2.12) In fact, the "only if" part holds with equality in place of inequality. For a more comprehensive treatment of the Brakke flow, see [38,Chapter 2]. Formally, if ∂ (t) ⊂ ∂U is fixed in time, with φ = 1, we also obtain |h(x, (t))| 2 dH k (x) , (2.13) which states the well-known fact that the L 2 -norm of the mean curvature represents the dissipation of area along the MCF. Motivated by (2.12) and (2.13), and for the purposes of this paper, we give the following definition. In this paper, we are interested in the n-dimensional Brakke flow in particular. Formally, by integrating (2.13) from 0 to T , we obtain the analogue of (2.14). By integrating (2.12) from t 1 to t 2 , we also obtain the analogue of (2.15) via the expression (2.11). We recall that the closure is taken with respect to the topology of R n+1 while the support of V t is in U . Thus (e) geometrically means that "the boundary of V t (or V t ) is ". Main results The main existence theorem of a Brakke flow with fixed boundary is the following. Since we are assuming that ∂ 0 = ∅, we have V t = 0 for all t > 0. If the union of the reduced boundaries of the initial partition in U coincides with 0 modulo H n -negligible sets (note that the assumptions (A2) and (A3) in Assumption 1.1 imply that 0 = U ∩ N i=1 ∂ E 0,i ), then the claim is that the initial condition is satisfied continuously as measures. Otherwise, an instantaneous loss of measure may occur at t = 0. As far as the regularity is concerned, under the additional assumption that {V t } t>0 is a unit density flow, partial regularity theorems of [2,19,37] show that V t is a smooth MCF for a.e. time and a.e. point in space, just like [20], see [20,Theorem 3.6] for the precise statement. No claim of the uniqueness is made here, but the next Theorem 2.3 gives an additional structure to V t in the form of "moving partitions" starting from E 0,1 , . . . , E 0,N . Then, ∀t > 0, we have The claims (1) is an L n+1 -partition of U , and that (t) has empty interior in particular. The claim (5) is an expected property for the MCF, and, by (11), spt V t is also in the same convex hull. (7) says that (t) has the fixed boundary ∂ 0 . In general, the reduced boundary of the partition and V t may not match, but the latter is bounded from below by the former as in (8). By (10), the Lebesgue measure of each E i (t) changes continuously in time, so that arbitrary sudden loss of measure of V t is not allowed. The statement in (11) says that the time-slice of the support of μ at time t contains the support of V t and is equal to the topological boundary of the moving partition. As a corollary of the above, we deduce the following. Corollary 2.4 There exist a sequence {t k } ∞ k=1 with lim k→∞ t k = ∞ and a varifold V ∈ IV n (U ) such that V t k → V in the sense of varifolds. The varifold V is stationary. Furthermore, there is a mutually disjoint family The varifold V in Corollary 2.4 is a solution to Plateau's problem in U in the class of stationary varifolds satisfying the topological constraint (clos (spt V ))\U = ∂ 0 . This is an interesting byproduct of our construction, above all considering that ∂ 0 enjoys in general rather poor regularity (in particular, it may have infinite (n − 1)-dimensional Hausdorff measure, and also it may not be countably (n − 1)-rectifiable). 
Even though the topological boundary condition specified above seems natural in this setting, other notions of spanning may be adopted: for instance, in Proposition 7.4 we show that a strong homotopic spanning condition in the sense of [7,14] is preserved along the flow and in the limit if it is satisfied at the initial time t = 0. We postpone further discussion and questions concerning the application to Plateau's problem to Sect. 7. General strategy and structure of the paper The general idea behind the proof of Theorems 2.2 and 2.3 is to suitably modify the time-discrete approximation scheme introduced in [2,20]. There, one constructs a timeparametrized flow of open partitions which is piecewise constant in time. We will call epoch any time interval during which the approximating flow is constant. The open partition at a given epoch is constructed from the open partition at the previous epoch by applying two operations, which we call steps. The first step is a small Lipschitz deformation of partitions with the effect of "regularizing singularities" by "locally minimizing the area of the boundary of partitions" at a small scale. This deformation is defined in such a way that, if the boundary of partitions is regular (relative to a certain length scale), then the deformation reduces to the identity. The second step consists of flowing the boundary of partitions by a suitably defined "approximate mean curvature vector". The latter is computed by smoothing the surface measures via convolution with a localized heat kernel. Note that, typically, the boundary of open partitions has bounded n-dimensional measure, but the unit-density varifold associated to it may not have bounded first variation. In [20], a time-discrete approximate MCF is obtained by alternating these two steps, epoch after epoch. In the present work, we need to fix the boundary ∂ 0 . The rough idea to achieve this is to perform an "exponentially small" truncation of the approximate mean curvature vector near ∂ 0 , so that the boundary cannot move in the "polynomial time scale" defining an epoch with respect to a certain length scale. We also need to make sure that the time-discrete movement does not push the boundary of open partitions to the outside of U . To prevent this, in addition to the two steps (Lipschitz deformation and motion by smoothed and truncated mean curvature vector), we add another "retraction to U " step to be performed in each epoch. All these operations have to come with suitable estimates on the surface measures, in order to have convergence of the approximating flow when we let the epoch time scale approach zero. The final goal is to show that this limit flow is indeed a Brakke flow with fixed boundary ∂ 0 as in Definition 2.1. The rest of the paper is organized as follows. Section 3 lays the foundations to the technical construction of the approximate flow by proving the relevant estimates to be used in the Lipschitz deformation and flow by smoothed mean curvature steps, and by defining the boundary truncation of the mean curvature. Both the discrete approximate flow and its "vanishing epoch" limit are constructed in Sect. 4. In Sect. 5 we show that the one-parameter family of measures obtained in the previous section satisfies conditions (a) to (d) in Definition 2.1. The boundary condition (e) is, instead, proved in Sect. 6, which therefore also contains the proofs of Theorems 2.2 and 2.3. Finally, Sect. 
7 is dedicated to the limit t → ∞: hence, it contains the proof of Corollary 2.4, as well as a discussion of related results and open questions concerning the application of our construction to Plateau's problem. Preliminaries In this section we will collect the preliminary results that will play a pivotal role in the construction of the time-discrete approximate flows. Some of the results are straightforward adaptations of the corresponding ones in [20]: when that is the case, we shall omit the proofs, and refer the reader to that paper. Classes of test functions and vector fields Define, for every j ∈ N, the classes A j and B j as follows: The properties of functions φ ∈ A j and vector fields g ∈ B j are precisely as in [20,Lemma 4.6,Lemma 4.7], and we record them in the following lemma for future reference. Lemma 3.1 Let x, y ∈ R n+1 and j ∈ N. For every φ ∈ A j , the following properties hold: Also, for every g ∈ B j : Open partitions and admissible functions LetŨ ⊂ R n+1 be a bounded open set. Later,Ũ will be an open set which is very close to U in Assumption 1.1. The set of all open partitions ofŨ of N elements will be denoted OP N (Ũ ). Note that some of the E i may be empty. Condition (b) implies that and thus that N i=1 ∂ E i is H n -rectifiable and each E i is in fact an open set with finite perimeter inŨ . By De Giorgi's structure theorem, the reduced boundary ∂ * E i is H n -rectifiable: nonetheless, the reduced boundary ∂ * E i may not coincide in general with the topological boundary ∂ E i , which makes condition (c) not redundant. We keep the following for later use. The proof is straightforward. Notation Given E ∈ OP N (Ũ ), we will set Here, to avoid some possible confusion, we emphasize that we want to consider ∂E as a varifold on R n+1 when we construct approximate MCF. On the other hand, note that we still consider the relative topology ofŨ , as ∂ E i ⊂Ũ here. In particular, writing = ∪ N i=1 ∂ E i , we have ∂E = H n , and where T x ∈ G(n + 1, n) is the approximate tangent plane to at x, which exists and is unique at H n -a.e. x ∈ because of Definition 3.2(c). Definition 3.4 Given be an open partition ofŨ in N elements, C ⊂⊂Ũ , and let f be E-admissible in C. If we defineẼ : Proof We check thatẼ satisfies properties (a)-(c) in Definition 3.2. By Definition 3.4(a) and (b), it is clear thatẼ 1 , . . . ,Ẽ N are open and mutually disjoint subsets ofŨ , which gives (a). In order to prove (b), we use Definition 3.4(c) and the area formula to compute: where we have used Definition 3.2(b) and (3.7). This also showsŨ Since any subset of a countably n-rectifiable set is countably n-rectifiable, also N i=1 ∂Ẽ i is countably n-rectifiable. then the open partitionẼ ∈ OP N (Ũ ) will be denoted f E. Area reducing Lipschitz deformations , j ∈ N and a closed set C ⊂⊂Ũ , define E(E, C, j) to be the set of all E-admissible functions f in C such that: is the symmetric difference of the sets E and F; and ∂E is the weight of the multiplicity one varifold associated to the open partition E. The set E(E, C, j) is not empty, as it contains the identity map. Definition 3.7 Given E ∈ OP N (Ũ ) and j, and given a closed set C ⊂⊂Ũ , we define Observe that it always holds j ∂E (C) ≤ 0, since the identity map f (x) = x belongs to E(E, C, j). The quantity j ∂E (C) measures the extent to which ∂E can be reduced by acting with area reducing Lipschitz deformations in C. 
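The displayed definition of the quantity measuring area reduction (Definition 3.7 above) appears garbled. Based on the surrounding statements, in particular that it is non-positive because the identity map is admissible, and on the analogous quantity in [20], it presumably reads:

$$\Delta_j \|\partial E\|(C) := \inf\big\{\, \|\partial(f E)\|(C) - \|\partial E\|(C) \;:\; f \in E(E, C, j) \,\big\} \;\le\; 0,$$

where f E denotes the deformed open partition introduced earlier.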
Smoothing of varifolds and first variations We let ψ ∈ C ∞ (R n+1 ) be a radially symmetric function such that and we define, for each ε ∈ (0, 1), where the constant c(ε) is chosen in such a way that The function ε will be adopted as a convolution kernel for the definition of the smoothing of a varifold. We record the properties of ε in the following lemma (cf. [20,Lemma 4.13]). Lemma 3.8 There exists a constant c = c(n) such that, for ε ∈ (0, 1), we have: Next, we use the convolution kernel ε in order to define the smoothing of a varifold and its first variation. Recall that, given a Radon measure μ on R n+1 , the smoothing of μ by means of the kernel ε is defined to be the Radon measure ε * μ given by The definition of smoothing of a varifold V is the equivalent of (3.15) when regarding V as a Radon measure on G n (R n+1 ), keeping in mind that the operator ( ε * ) acts on a test function ϕ ∈ C c (G n (R n+1 )) by convolving only the space variable. Explicitly, we give the following definition. Definition 3.9 Given Observe that, given a Radon measure μ on R n+1 , one can identify the measure ε * μ with a C ∞ function by means of the Hilbert space structure of These considerations suggest the following definition for the smoothing of the first variation of a varifold. Definition 3.10 Given in such a way that Proof The identities (3.19) and (3.20) are proved in [20,Lemma 4.16]. Concerning (3.21), we observe that for any Taking the supremum among all functions ϕ ∈ C c (G n (R n+1 )) with ϕ 0 ≤ 1 completes the proof. Smoothed mean curvature vector (3.22) We will often make use of [20, Lemma 5.1] with ≡ 1 (and c 1 = 0). For the reader's convenience, we provide here the statement. The cut-off functions Á j In this subsection we construct the cut-off functions which will later be used to truncate the smoothed mean curvature vector in order to produce time-discrete approximate flows which almost preserve the boundary ∂ 0 . Given a set E ⊂ R n+1 and s > 0, (E) s denotes the s-neighborhood of E, namely the open set We shall also adopt the convention that (E) 0 = E. Let U and 0 be as in Assumption 1.1. Definition 3.14 We define for j ∈ N: Observe that D j is not empty for all j sufficiently large (depending on U ). Also, we define the sets Next, we prove (2). Let x ∈K j , so that there exists z ∈ 0 \ D j such that |x − z| < 2 j − 1 /4 . If y ∈ B ρ j (x), then |y − z| < 3 j − 1 /4 by the definition of ρ j , and thus, for j suitably large, Hence, by property (a) of ψ in Definition 3.15: In particular, up to taking larger values of j, we see that Finally, we prove (3). To this aim, we compute the gradient of η j : at any point x, we have Using that t = ψ(t) for 0 ≤ t ≤ 1/2, ψ (t) = 0 for t ≥ 3/2, and that |t| = t ≤ 2 ψ(t) for t ∈ [1/2, 3/2], together with the fact that |ψ | ≤ 1, we can estimate where we have used that ∇d j (x) = φ ρ j * ∇d j (x), so that In particular, |∇η j | ≤ j 3 /4 η j as soon as j ≥ 4. Next, we compute the Hessian of η j from which we estimate Now, observe that Hence, recalling that ρ j = j − 1 /4 , we conclude the estimate for a constant C depending only on n. Thus, we conclude η j ∈ A j 3 /4 for j sufficiently large. L 2 approximations In this subsection, we collect a few estimates of the error terms deriving from working with smoothed first variations and smoothed mean curvature vectors. They will be critically important to deduce the convergence of the discrete approximation algorithm. The first estimate is a modification of [20,Proposition 5.3]. 
We let η j be the cut-off function as in Definition 3.15, corresponding to U and 0 , and we will suppose that j ≥ J (n), in such a way that the conclusions of Lemma 3.16 are satisfied. Proposition 3.17 For every M > 0, there exists ε 2 ∈ (0, 1) depending only on n and M such that the following holds. For Given the validity of (3.18), we see that (3.32) measures the deviation from the identity (2.5). The difference with [20,Proposition 5.3] is that there, in place of η j g (left-hand side of (3.32)) and η j (right-hand side of (3.32)), we have g and , respectively. We note that g η j : using these, the modification of the proof is straightforward, and thus we omit the details. Proposition 3.18 There exists a constant ε 3 ∈ (0, 1) depending only on n and M with the following property. Given Note that formula (3.33) estimates the deviation from the identity (2.5) with g = h(·, V ). The next statement is [20,Proposition 5.5]. The proof is a straightforward modification, using (3.32). Proposition 3.19 For every M > 0, there exists ε 4 ∈ (0, 1) depending only on n and M with the following property. For (3.35) Curvature of limit varifolds The next Proposition 3.20 corresponds to [20,Proposition 5.6] when there is no boundary. Proof By (1), we may choose a (not relabeled) subsequence V j converging to V as varifolds on R n+1 , and we may assume that the integrals in (2) for this subsequence converge to the lim inf of the original sequence. Fix g ∈ C 2 c (U ; R n+1 ). For all sufficiently large , we have g η j = g due to Lemma 3.16 (1), (3.27) and (3.26). Moreover, we may assume that g η j ∈ B j due to Lemma 3.16 (3). Then, by (3.35), (2) and (3), we have Since η j ∈ A j in particular, by the Cauchy-Schartz inequality and (3.34), we have This shows that δV is absolutely continuous with respect to V on U and h(·, V ) satisfies Given φ ∈ C 2 c (U ; R + ) (C c case is by approximation), let i ∈ N be arbitrary and consider φ := φ + i −1 . For all sufficiently large , we have g η j φ ∈ B j and η j φ ∈ A j (we may assume |φ| < 1 without loss of generality). Thus the same computation above with g η j φ yields We let then i → ∞ in (3.40) to replaceφ by φ, and finally we approximate h(·, V ) by g to obtain (3.36). Motion by smoothed mean curvature with boundary damping We aim at proving the following proposition: it contains the perturbation estimates for a varifold V which is moved by a vector field consisting of a boundary damping of its smoothed mean curvature for a time t. Proposition 3.21 There exists ε 5 ∈ (0, 1), depending only on n, M and U such that the following holds. Suppose that: Then, for every φ ∈ A j we have the following estimates. Proof. We want to estimate the following quantity which can be written as with Choose ε 5 ≤ min{ε 1 , ε 3 }, so that the conclusions of Lemma 3.13 and Proposition 3.18 hold with ε ∈ (0, ε 5 ). In order to estimate the size of the various integrands appearing in the definition of I 1 , I 2 and I 3 , we first observe that, by (3.23) and our assumption on t, (3.45) Furthermore, using (3.23), (3.24), (3.31), and the fact that η j ∈ A j we obtain Since φ ∈ A j , we can use the results of Lemma 3.1 to estimate: for any orthonormal basis {v 1 , . . . , v n } of S, we can Taylor expand the tangential Jacobian and deduce the estimates modulo choosing a smaller value of ε if necessary. 
Putting all together, we can finally conclude the proof of (3.41): In order to prove (3.42), we use (3.41) with φ(x) ≡ 1, which implies that On the other hand, since η j ∈ A j we can apply (3.33) to further estimate so that (3.42) follows by choosing ε so small that 1 − ε 1 /4 ≥ 1/4. Finally, we turn to the proof of (3.43) and (3.44). In order to simplify the notation, let us writeV instead of f V . Using the same strategy as in [20, Proof of Proposition 5.7], we can estimate The first term can be estimated by observing that for some pointŷ on the segment and using that because of (3.49), so that Concerning the second term in the sum, we can use (3.49) again to estimate Putting the two estimates together, we see that (3.54) Analogous calculations lead to The rough estimates also give The estimates (3.54), (3.55), and (3.56) immediately yield as well as (3.58) Observe that, since spt V ⊂ (U ) 1 , the right-hand side of estimates (3.57) and (3.58) is zero whenever dist(x, clos(U )) > 3. Hence, (3.58) and the monotonicity of the mass by possibly choosing a smaller value of ε (depending on U and M). This proves (3.44). Finally, we prove (3.43). By (3.22), (3.57), and the properties of ε , we deduce that for l = 0, 1, 2. We can conclude using (3.59), (3.45)-(3.49) and suitable interpolations that: The construction of the approximate flows Suppose U and 0 are as in Assumption 1.1. Together with the sets D j , K j ,K j ,K j introduced in Definition 3.14, for k = 0, 1, . . ., we set Once again, here the indices j and k are chosen in such a way that the corresponding sets D j,k are non-empty proper subsets of U . Observe that we have the elementary inclusions D j,0 ⊂ D j,k ⊂ D j,k for every 0 ≤ k ≤ k , and that D j ⊂ D j,k for every k. Before proceeding with the construction of the time-discrete approximate flows, we need to introduce a suitable new class of test functions. Since U is an open and bounded convex domain with boundary ∂U of class C 2 , there exists a neighborhood (∂U ) s 0 such that, denoting is monotone non decreasing for t such that (4.1) The following proposition and its proof contain the constructive algorithm which produces the time-discrete approximations of our Brakke flow with fixed boundary. , and 0 be as in Assumption 1.1. There exists a positive integer J = J (n) with the following property. For every j ≥ J (n), there exist ε j ∈ (0, 1) satisfying (3.31), p j ∈ N, and, for every 2) and such that, setting t j := 2 − p j , and defining j,k := U j,k \ N i=1 E j,k,i , the following holds true: Moreover, we have: for every k ∈ {1, . . . , j 2 p j } and φ ∈ A j ∩ R j . The set A j,k is a relatively open subset of ∂(D j,k−1 ) j −10 . Let A j,k,l ⊂ A j,k be any of the (at most countably many) connected components of A j,k and define Ret j,k,l := {r s (x) : x ∈ A j,k,l , s ∈ (0, 1)}. Proof The claim follows directly from Lemma 4.3. Lemma 4.5 implies that for each l there exists some i(l) ∈ {1, . . . , N } such that E j,k,i(l) contains A j,k,l ∪ (∂ A j,k,l ) j −10 . For each index l, let i(l) be this correspondence. We define In other words, when Proof Note that˜ j,k ∩ Ret j,k \ D j,k−1 = ∅ since ∂Ret j,k \ D j,k−1 is contained in some open partition by Lemma 4.5 and˜ j,k ∩ Ret j,k = ∅. If there exists x ∈˜ j,k \ (K j ∪ D j,k−1 ), then x / ∈ Ret j,k and thus x ∈ j,k \ (K j ∪ D j,k−1 ) = j,k−1 \ (K j ∪ D j,k−1 ). By (4.11), x ∈ (D j,k−1 ) j −10 \ (K j ∪ D j,k−1 ). By Lemma 4.4, x ∈ Ret j,k , which is a contradiction. This proves the first claim. 
The second claim follows from the definition of˜ j,k , in the sense that the new partition has no boundary in Ret j,k , while j,k \ (D j,k−1 ∪ Ret j,k ) is kept intact. The identity in (4.14) is also used to obtain the last equality. Lemma 4.7 For any φ ∈ R j we have: Proof Note that˜ j,k j,k ⊂ (∂ D j,k−1 ∩ Ret j,k ) ∪ Ret j,k , and that˜ j,k ∩ Ret j,k = ∅. Let Ret j,k,l and E j,k,i(l) be as before. For any x ∈˜ j,k ∩ Ret j,k,l ⊂ ∂ D j,k−1 , consider x ∈ ∂(D j,k−1 ) j −10 such that r 0 (x) = x. Note thatx = r 1 (x) ∈ E j,k,i(l) . If r s (x) / ∈ j,k for all s ∈ [0, 1), then r 0 (x) = x ∈ E j,k,i(l) and we have x ∈Ẽ j,k,i(l) , which is a contradiction to x ∈˜ j,k . Thus there exists s ∈ [0, 1) such that r s (x) ∈ j,k . In particular, we see that j,k ∩Ret j,k is in the image of j,k ∩Ret j,k through the normal nearest point projection onto ∂ D j,k−1 . Furthermore, since r s (x) = x + s |x − x| ν U (x), and since φ is ν U -non decreasing in R n+1 \ D j , it holds φ(x) ≤ φ(r s (x)). Given that the normal nearest point projection onto ∂ D j,k−1 is a Lipschitz map with Lipschitz constant = 1, the desired estimate follows from the area formula. Note that, as a corollary of Lemma 4.7, we have that, settingẼ j, (4.20) Step 3: motion by smoothed mean curvature with boundary damping. LetṼ j,k = ∂Ẽ j,k as defined in (3.8), and compute h ε j (·) := h ε j (·,Ṽ j,k ). Also, let η j ∈ A j 3 /4 be the cutoff function defined in Definition 3.15. Observe that j has been chosen so that the conclusions of Lemma 3.16 hold. Define the smooth diffeomorphism f j,k (x) Observe that the induction hypothesis (4.12), together with (4.15) and (4.20), implies that Ṽ j,k (R n+1 ) ≤ M as defined in (4.6). Hence, by Lemma 3.16, and using (3.23) and the definition of t j , we can conclude that |η j h ε t j | ≤ exp(− j 1 /8 ) onK j . By the choice of ε j , we also have that |η j h ε t j | ≤ j −10 everywhere. Set Lemma 4.8 We have namely (4.9) with k in place of k − 1 holds true. Lemma 4.9 We have Proof Suppose, towards a contradiction, that x ∈ f j,k (D j,k−1 ) ∩ (K j \ D j,k ). Since | t j η j h ε j | 1/ j 1/4 for all points,x := f −1 j,k (x) is inK j in particular. Then, |η j (x) h ε j (x) t j | ≤ exp(− j 1 /8 ). This means that |x −x| ≤ exp(− j 1 /8 ). Since x / ∈ D j,k , we need to havex / ∈ D j,k−1 by the definition of these sets. But this is a contradiction since x = f j,k (x) ∈ f j,k (D j,k−1 ) and f j,k is bijective. Lemma 4. 10 We have (4.22) namely (4.10) with k in place of k − 1 holds true. Lemma 4.11 We have namely (4.11) with k in place of k − 1 holds true. Proof If x ∈ j,k \ K j , then there isx ∈˜ j,k such that x = f j,k (x). Ifx / ∈ K j , then x ∈ D j,k−1 ⊂ D j,k by Lemma 4.6, and since |x −x| < j −10 by the properties of the diffeomorphism f j,k our claim holds true. Hence, suppose thatx ∈ K j . Since in this case |x −x| ≤ exp(− j 1 /8 ), ifx ∈ D j,k−1 then evidently x ∈ D j,k , and the proof is complete. On the other hand, we claim that it has to bex ∈ D j,k−1 . Indeed, otherwise we would havẽ x ∈˜ j,k ∩ K j \ D j,k−1 , and thus, again by Lemma 4.6,x ∈ j,k ∩ K j \ D j,k−1 = j,k−1 ∩ K j \ D j,k−1 . But then, by (4.10), there exists y ∈ 0 such that |x − y| . But this contradicts the fact that x / ∈ K j and completes the proof. Conclusion. Together, Lemmas 4.8, 4.10 and 4.11 complete the induction step from k − 1 to k for properties (1), (2), (3). 
Concerning (4.3), first we observe that, since f j,k is a diffeomorphism, (4.24) We can then use (3.42) with V = ∂Ẽ j,k , M as defined in (4.6), ε = ε j , and t = t j in order to conclude that Combining (4.25) with (4.15) and (4.20), and using that 2 ε We are now in a position to define an approximate flow of open partitions. As anticipated in the introduction, the flow is piecewise constant in time; the parameter t j defined in (4.8) is the epoch length, namely the length of the time intervals in which the flow is set to be constant. Convergence in the sense of measures for all φ ∈ C c (U ) and t ∈ R + . The limits lim s→t+ μ s (φ) and lim s→t− μ s (φ) exist and satisfy 29) and for a.e. t ∈ R + it holds Proof Let 2 Q be the set of all non-negative numbers of the form i 2 j for some i, j ∈ N ∪ {0}. 2 Q is countable and dense in R + . For each fixed T ∈ N, the mass estimate in (4.3) implies that lim sup (4.31) Therefore, by a diagonal argument we can choose a subsequence { j } and a family of Radon Furthermore, with (4.31), we also deduce that Next, let Z := {φ q } q∈N be a countable subset of C 2 c (U ; R + ) which is dense in C c (U ; R + ) with respect to the supremum norm. We claim that the function is monotone non-increasing. To see this, first observe that since φ q has compact support, and since the definition in (4.34) depends linearly on φ q , we can assume without loss of generality that φ q < 1. For convenience, for t ≤ 0, we define g q (t) := μ 0 (φ q ) = ∂E 0 (φ q ). Next, given any j ≥ J (n) as in Proposition 4.2, for every positive function φ such that η j φ ∈ A j we can compute for every t ∈ [0, j], and where h ε j (·) = h ε j (·, ∂E j (t)). By the choice of ε j , and since η j φ ∈ A j , we can use (3.33) to estimate whereas Young's inequality together with (3.34) yields (4.37) Plugging (4.36) and (4.37) into (4.35), we obtain for every t ∈ [0, j] and for every positive function φ such that η j φ ∈ A j . Now, for every T ∈ N, for every φ q ∈ Z with φ q < 1, and for every sufficiently large i ∈ N, choose j * ≥ max{T , J (n)} so that for every j ≥ j * . Using that η j ∈ A j 3 /4 for every j ≥ J (n) and that φ q = 0 outside some compact set K ⊂ U , it is easily seen that the two conditions above can be met by choosing j * sufficiently large, depending on i, φ q C 2 , and K . In particular, j * is so large that φ q ≡ 0 on (∂U ) − s 0 \ D j * , so that φ q + i −1 is trivially ν U -non decreasing in R n+1 \ D j * because it is constant in there. For any fixed t 1 , t 2 ∈ [0, T ] ∩ 2 Q with t 2 > t 1 , choose a larger j * , so that both t 1 and t 2 are integer multiples of 1/2 p j * . Then, both t 2 and t 1 are integer multiples of t j for every j ≥ j * . Hence, for every j ≥ j * we can apply (4.5) repeatedly with φ = φ q + i −1 ∈ A j ∩ R j and (4.38) again with φ = φ q + i −1 so that η j φ ∈ A j in order to deduce (4.39) As we let → ∞, the left-hand side of (4.39) can be bounded from below, using (4.31) and (4.32), as follows: In order to estimate the right-hand side of (4.39), we note that so that if we plug (4.41) in (4.39), use that η j ≤ 1, let → ∞ by means of (4.31), and finally let i → ∞ we conclude for every t 1 , t 2 ∈ [0, T ] ∩ 2 Q with t 2 > t 1 and for any φ q ∈ Z with φ q < 1, thus proving that the function defined in (4.34) is indeed monotone non-increasing on [0, T ]. Since T is arbitrary, the same holds on R + . Define now By the monotonicity of each g q , B is a countable subset of R + , and for every t for every t ∈ R + \ (B ∪ 2 Q ) and φ q ∈ Z . 
(4.44) Indeed, due to the definition of ∂E j (t), there exists a sequence {t } ∞ =1 ⊂ 2 Q with t > t such that lim →∞ t = t and ∂E j (t) = ∂E j (t ). For any s ∈ 2 Q with s > t, and for all suffciently large so that s > t , we deduce from (4.39) that (4.45) Taking the lim inf →∞ and then the lim i→∞ on both sides of (4.45) we obtain that so that when we let s → t+ the definition of μ t and the fact that An analogous argument provides, at the same time, so that (4.47) and (4.48) together complete the proof of (4.44). Since Z is dense in C c (U ; R + ), (4.44) determines the limit measure uniquely, and the convergence holds for every φ ∈ C c (U ) at every t ∈ R + \ B. On the other hand, since B is countable we can extract a further subsequence of {∂E j (t)} ∞ =1 converging to a Radon measure μ t in U for every t ≥ 0. The continuity of μ t (φ) on R + \ B follows from the definition of B and a density argument. The existence of limits and the inequalities (4.28) can be also deduced from (4.42) in the case φ = φ q , and by density for φ ∈ C c (U ; R + ). This completes the proof of the first part of the statement. The claim in (4.29) follows from (4.4). Finally, (4.29) implies that for each T > 0 where in the last identity we have used that given the definition of κ and the fact that ε j satisfies (3.31). The proof is now complete. Brakke's inequality, rectifiability and integrality of the limit In the next proposition we deduce further information concerning the family {μ t } t≥0 of measures in U introduced in Proposition 4.13. (1) For a.e. t ∈ R + the measure μ t is integral, namely there exists an integral varifold then ∂E j (t) converges to V t ∈ IV n (U ) as varifolds in U as → ∞, namely for any φ ∈ C c (U ; R + ). (2) lim →∞ j 4 ε j = 0 and j ≤ ε Here, c 0 is a constant depending only on n. Furthermore, V G n (U ) ∈ RV n (U ). Proof The existence of a subsequence {∂E j } ∞ =1 converging in the sense of varifolds to V ∈ V n (R n+1 ) follows from the compactness theorem for Radon measures using assumption (3). The limit varifold V satisfies spt V ⊂ clos U because of assumption (1). Indeed, since spt ∂E j ⊂ clos U j by definition of open partition, if x ∈ R n+1 \ clos U then (1) implies that there is a radius r > 0 such that ∂E j (U r (x)) = 0 for all sufficiently large , which in turn gives V (U r (x)) = 0. Furthermore, the validity of (2), (3), and (4) allows us to apply Proposition 3.20 in order to deduce that δV U is a Radon measure. Hence, the rectifiability of the limit varifold in U is a consequence of Allard's rectifiability theorem [1, Theorem 5.5(1)] once we prove (5.4). In turn, the latter can be obtained by repeating verbatim the arguments in [20,Theorem 7.3]. Indeed, the proof in there is local, and for a given x 0 ∈ U it can be reproduced by replacing B 1 (x 0 ) in [20,Theorem 7.3] by B ρ (x 0 ) for sufficiently small ρ > 0 and large so that B ρ (x 0 ) ⊂ D j and η j = 1 on B ρ (x 0 ). Theorem 5.3 (Integrality Theorem). Under the same assumptions of Theorem 5.2, if the stronger Just like Theorem 5.2, the claim is local in nature and the proof is the same as [20,Theorem 8.6]. Proof of Proposition 5.1 First, observe that by (4.29) and Fatou's lemma we have lim inf for a.e. t ∈ R + . Furthermore, from (4.3) and the definition of ∂E j (t) we also have that for Let t ∈ R + be such that ( , ∂E j (t) converges, as → ∞, to a varifold V t ∈ V n (R n+1 ) with spt V t ⊂ clos U and such that V t G n (U ) ∈ IV n (U ). 
Since the convergence is in the sense of varifolds, the weights converge as Radon measures, and thus lim →∞ ∂E j (t) = V t : (4.27) then readily implies that V t U = μ t as Radon measures on U , thus proving (1). Concerning the statement in (2), let { j } ∞ =1 be a subsequence along which (5.1) holds. Then, any converging further subsequence must converge to a varifold satisfying the conclusion of Theorem 5.3. A priori, two distinct subsequences may converge to different limits. On the other hand, each subsequential limit V t is a rectifiable varifold when restricted to the open set U , and furthermore it satisfies V t U = μ t . Since rectifiable varifolds are uniquely determined by their weight, we deduce that the limit in U is independent of the particular subsequence, and thus (5.1) forces the whole sequence ∂E j (t) to converge to a uniquely determined integral varifold V t in U . Finally, (3) follows from Proposition 3.20. A byproduct of the proof of Proposition 5.1 is the existence of a (uniquely defined) integral varifold V t ∈ IV n (U ) with weight V t = μ t for every t ∈ R + \ Z , where L 1 (Z ) = 0. Such a varifold V t is the limit on U of any sequence ∂E j (t) along which (5.1) holds true. We can now extend the definition of V t to t ∈ Z so to have a one-parameter family {V t } t∈R + ⊂ V n (U ) of varifolds satisfying V t = μ t for every t ∈ R + . Such an extension can be defined in an arbitrary fashion: for instance, if t ∈ Z then we can set V t (ϕ) := ϕ(x, S) dμ t (x) for every ϕ ∈ C c (G n (U )), where S is any constant plane in G(n + 1, n). In the next theorem, we show that the family of varifolds {V t } is indeed a Brakke flow in U . The boundary condition and the initial condition will be discussed in the following section. Theorem 5.4 (Brakke's inequality). For every T > 0 we have Furthermore, for any φ ∈ C 1 c (U × R + ; R + ) and 0 ≤ t 1 < t 2 < ∞ we have: Proof In order to prove (5.7), we use (4.5) with φ = 1 which belongs to A j ∩ R j for all j. Assume T ∈ 2 Q first. By summing over the index k and for all sufficiently large j, we have By (3.33) and (5.3) as well as V T (U ) ≤ lim inf →∞ ∂E j (T ) (U ), we obtain (5.7). For T / ∈ 2 Q , use (4.28) to deduce the same inequality. We now focus on proving the validity of Brakke's inequality (5.8). Step 1. We will first assume that φ is independent of t, and then extend the proof to the more general case. By an elementary density argument, we can assume that φ ∈ C ∞ c (U ; R + ). Moreover, since the support of φ is compact and (5.8) depends linearly on φ, we can also normalize φ in such a way that φ < 1 everywhere. Then, for all sufficiently large i ∈ N, alsô φ := φ + i −1 < 1 everywhere. Arguing as in the proof of Proposition 4.13, we can choose m ∈ N so that m ≥ J (n) (see Lemma 3.16) and furthermore for all j ≥ m. Next, fix 0 ≤ t 1 < t 2 < ∞, and let be such that j ≥ m and j ≥ t 2 , so that ∂E j (t) is certainly well defined for t ∈ [t 1 , t 2 ]. By the condition (i) above, we can apply (4.5) withφ and deduce for every t = t j , 2 t j , . . . , j 2 p j t j . Since t j → 0 as → ∞, we can assume without loss of generality that t j < t 2 − t 1 , so that there exist k 1 , k 2 ∈ N with k 1 < k 2 such that t 1 ∈ (k 1 − 2) t j , (k 1 − 1) t j and t 2 ∈ (k 2 − 1) t j , k 2 t j . If we sum (5.9) on t = k t j for k ∈ [k 1 , (5.10) we can estimate the left-hand side of (5.10) from below as (5.11) so that when we let → ∞ we conclude where we have used (4.27) together with Proposition 5.1(1). 
Next, we estimate the right-hand side of (5.10) from above. Setting ∂E j = ∂E j (t) and h ε j = h ε j (·, ∂E j ), we proceed as in (4.35) writing (5.13) where we have used that ∇φ = ∇φ. Since η j φ ∈ A j , we can apply (3.33) in order to obtain that where we have set for simplicity Concerning the second summand in (5.13), we use the Cauchy-Schwarz inequality to estimate where c depends only on φ C 2 , and where we have used (3.34). Using (5.14), (5.16) and (4.3), we can then conclude that where c depends only on φ C 2 and ∂E 0 (R n+1 ). Using (5.17) together with the definition of ∂E j (t) and Fatou's lemma, one can readily show that, when we take the lim sup as → ∞, the right-hand side of (5.10) can be bounded by Now, fix t ∈ [t 1 , t 2 ] such that lim inf →∞ b j < ∞ (which holds for a.e. t), and let { j } ⊂ { j } be a subsequence which realizes the lim sup, namely with ) . (5.19) By the identity in (5.13), we also have that along the same subsequence (5.20) where once again ∂E j = ∂E j (t) and h ε j = h ε j (·, ∂E j ). Using (5.14) and (5.16), we see that the right-hand side of (5.20) can be bounded from above by lim inf →∞ 2 b j +c, whereas the left-hand side can be bounded from below by lim sup →∞ 1 2 b j − c, where c depends on φ C 2 and ∂E 0 (R n+1 ). As a consequence, along any subsequence { j } satisfying (5.19) one has that lim sup where ∂E j = ∂E j (t). Let us denote the right-hand side of (5.21) as B(t). Sinceφ ≥ i −1 , and thanks to (5.21), if B(t) < ∞ then the assumption (5.1) of Proposition 5.1 is satisfied along j : hence, the whole sequence {∂E j (t)} ∞ =1 converges to V t ∈ IV n (U ) as varifolds in U . Furthermore, using one more time thatφ ≥ i −1 we deduce that lim sup Using (5.19), (5.13), (5.14),φ > φ, and Proposition 5. where we have also used that, as → ∞, η j = 1 on {∇φ = 0} ⊂⊂ U . Now, recall that V t ∈ IV n (U ). Therefore, there is an H n -rectifiable set M t ⊂ U such that for any ε > 0 there are a vector field g ∈ C ∞ c (U ; R n+1 ) and a positive integer m such that g ∈ B m and In order to estimate the lim sup in the right-hand side of (5.23), we can now compute, for ∂E j = ∂E j (t): We proceed estimating each term of (5.26). Using that η j = 1 on {∇φ = 0} for all sufficiently large, the Cauchy-Schwarz inequality gives that for all sufficiently large. Since (x, S) → |S ⊥ (∇φ(x)) − g(x)| 2 ∈ C c (G n (U )), we have that (5.25) ≤ ε 2 . Analogously, since η j = 1 on {g = 0} for all sufficiently large, we have that by (3.35) and (5.22). Next, by varifold convergence of ∂E j to V t on U , given that g has compact support in U , we also have Finally, letting ψ be any function in C c (U ; R + ) such that ψ = 1 on {g = 0} ∪ {∇φ = 0} and 0 ≤ ψ ≤ 1, the Cauchy-Schwarz inequality allows us to estimate where we have also used (2.6). We can now combine (5.10), (5.12), (5.18), (5.23), and (5.33) to deduce that We use the Cauchy-Schwarz inequality one more time, and combine it with the definition of B(t) as the right-hand side of (5.21) and with Fatou's lemma to obtain the bound which is finite (depending on t 2 ) by (4.29) (recall thatφ ≤ 1 everywhere). Brakke's inequality (5.8) for a test function φ which does not depend on t is then deduced from (5.34) after letting ε ↓ 0 and then i ↑ ∞. Step 2. We consider now the general case of a time dependent test function φ ∈ C 1 c (U × R + ; R + ). We can once again assume that φ is smooth, and then conclude by a density argument. The proof follows the same strategy of Step 1. 
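For orientation, we recall the classical formulation of Brakke's inequality for an n-dimensional Brakke flow {V_t}_{t≥0} in U; the display below is the standard statement and is reported here only as a reference for the shape of (5.8), whose precise notation in the present setting may differ in inessential details:
\[
\|V_{t_2}\|\bigl(\phi(\cdot,t_2)\bigr)-\|V_{t_1}\|\bigl(\phi(\cdot,t_1)\bigr)
\;\le\;
\int_{t_1}^{t_2}\!\!\int_{U}
\Bigl(\bigl(\nabla\phi(x,t)-\phi(x,t)\,h(x,V_t)\bigr)\cdot h(x,V_t)
+\frac{\partial\phi}{\partial t}(x,t)\Bigr)\,d\|V_t\|(x)\,dt
\]
for every 0 ≤ t_1 < t_2 < ∞ and every φ ∈ C^1_c(U × R^+; R^+), where h(·, V_t) denotes the generalized mean curvature vector of V_t.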
We defineφ analogously, and then we apply (4.5) with φ =φ(·, t). In place of (5.9), we then obtain a formula with one extra term, namely Similarly, the inequality in (5.10) needs to be replaced with an analogous one containing, in the right-hand side, also the term (5.37) Using the regularity of φ and the estimates in (4.3) and (4.4), we may deduce that where the last identity is a consequence of (4.27), Proposition 5.1(1), and Lebesgue's dominated convergence theorem. The remaining part of the argument stays the same, modulo the following variation. The identity in (5.18) remains true ifφ is replaced by the piecewise constant functionφ j defined bŷ The error one makes in order to putφ back into (5.18) in place ofφ j is then given by the product of t j times some negative powers of ε j ; nonetheless, this error converges to 0 uniformly as ↑ ∞ by the choice of t j , see (4.8). This allows us to conclude the proof of (5.8) precisely as in the case of a time-independent φ whenever φ ∈ C ∞ c (U × R + ; R + ), and in turn, by approximation, also when φ ∈ C 1 c (U × R + ; R + ). Vanishing of measure outside the convex hull of initial data First, we prove that the limit measures V t vanish uniformly in time near ∂U \ ∂ 0 . This is a preliminary result, and using the Brakke's inquality, we eventually prove that they actually vanish outside the convex hull of 0 ∪ ∂ 0 in Proposition 6.4. Proposition 6.1 Forx ∈ ∂U \ ∂ 0 , suppose that an affine hyperplane A ⊂ R n+1 withx / ∈ A has the following property. Let A + and A − be defined as the open half-spaces separated by A, i.e., R n+1 is a disjoint union of A + , A and A − , withx ∈ A + . Define d A (x) := dist (x, A − ), and suppose that Then for any compact set C ⊂ A + , we have (6.1) Remark 6.2 Due to the definition of ∂ 0 and the strict convexity of U , note that there exists such an affine hyperplane A for any givenx ∈ ∂U \ ∂ 0 . For example, we may choose a hyperplane A which is parallel to the tangent space of ∂U atx and which passes througĥ By the strict convexity of U and the C 1 regularity of ν U , for all sufficiently small c > 0, one can show that such A satisfies the above (1) and (2). Remark 6.3 In the following proof, we adapted a computation from [17, p.60]. There, the object is the Brakke flow, but the basic idea here is that a similar computation can be carried out for the approximate MCF with suitable error estimates. Proof We may assume after a suitable change of coordinates that A = {x n+1 = 0} and A + = {x n+1 > 0}. With this, we have clos 0 ⊂ {x n+1 < 0} and d A (x) = max{x n+1 , 0} is ν U -non decreasing in {x n+1 > 0}. Let s > 0 be arbitrary, and define for some β ≥ 3 to be fixed later. Then φ ∈ C 2 (R n+1 ; R + ), and letting {e 1 , . . . , e n+1 } denote the standard basis of R n+1 , we have With s > 0 fixed, we choose sufficiently large j so that φ ∈ A j . Actually, the function φ as defined in (6.2) is unbounded. Nonetheless, since we know that spt ∂E j (t) ⊂ (U ) 1/(4 j 1 /4 ) , we may modify φ suitably away from U by multiplying it by a small number and truncating it, so that φ ≤ 1. We assume that we have done this modification if necessary. We also choose j so large that η j = 1 on {x n+1 ≥ 0}. This is possible due to Lemma 3.16 (1). Additionally, since d A is ν U -non decreasing in A + , and since φ is constant in R n+1 \ A + , we have φ ∈ R j . Thus, by (4.5), we have for ∂E j,k =: V and ∂E j,k−1 =:V with k ∈ {1, . . . 
, j2 p j } For all sufficiently large j, we also have η j φ ∈ A j , thus we may proceed as in (4.35) and estimate (6.5) Here we have used that η j = 1 when ∇φ = 0. In the present proof, we omit the domains of integration, which are either R n+1 or G n (R n+1 ) unless specified otherwise. We use (3.34) to proceed as: We prove that the last term gives a good negative contribution. We have Here we replace ∇φ(x) by ∇φ(y) and estimate the error To estimate (6.7), since η j φ ∈ A j , (3.1) and (3.3) imply By separating the integration to B √ ε j (y) and y)). (6.8) Let us denote c ε j := c(n)ε −n−1 j j exp( j −(2ε j ) −1 ) and note that it is exponentially small (say, for all sufficiently large j. By (6.3), we have (6.14) where in the last identity we have used that S is the matrix representing an orthogonal projection operator, so that S is symmetric and S 2 = S, whence In particular, the quantity in (6.14) can be made negative if β = 4, for example. This shows that (6.13) is less than 2ε 1 /8 j . By summing over k = 1, . . . , j 1 /2 /( t j ) and using that We use this in (6.15), and we let first j → ∞ and then s → 0 in order to obtain (6.1). Proposition 6.4 For all t Proof Suppose that A ⊂ R n+1 is a hyperplane such that, using the notation in the statement of Proposition 6.1, 0 ∪ ∂ 0 ⊂ A − . If d A is ν U -non decreasing in A + , then (6.1) proves immediately that V t (A + ) = 0 for all t ≥ 0. Thus, suppose that d A does not satisfy this property. Still, due to Proposition 6.1, for each x ∈ ∂U \ ∂ 0 , there exists a neighborhood B r (x) such that V t (B r (x) ∩ U ) = 0 for all t ≥ 0. In particular, there exists some r 0 > 0 such that We next use φ = ψ d 4 A in (5.8) with t 1 = 0 and an arbitrary t 2 = t > 0 to obtain (6.17) By (6.16), φ = d 4 A on the support of V s . Since S · ∇ 2 d 4 A ≥ 0 for any S ∈ G(n + 1, n) (see (6.14)), the right-hand side of (6.17) is ≤ 0. Since V 0 (φ) = 0, we have V t (A + ) = 0 for all t > 0. This proves the claim. In the following, we list results from [20,Section 10]. The results are local in nature, thus even if we are concerned with a Brakke flow in U instead of R n+1 , the proofs are the same. We recall the following (cf. Theorem 2.3(11)): Definition 6.5 Define a Radon measure μ on U × R + by setting dμ := d V t dt, namely for every φ ∈ C c (U × R + ) . (6.18) Lemma 6. 6 We have the following properties for μ and {V t } t∈R + . The next Lemma (see [20,Lemma 10.10 and 10.11]) is used to prove the continuity of the labeling of partitions. denote the open partitions for each j and t ∈ R + , i.e., Then for all t ∈ (t − r 2 , t + r 2 ], we have Then for all t ∈ (0, r 2 ], we have The following is from [2, 3.7]. Lemma 6.8 Suppose that V t (U r (x)) = 0 for some t ∈ R + and U r (x) ⊂⊂ U . Then, for every t ∈ t, t + r 2 2n it holds V t (U √ r 2 −2n (t −t) (x)) = 0. 1 , for each t and i the volumes L n+1 (E j ,i (t)) are uniformly bounded in . Furthermore, by the mass estimate in (4.31) we also have that ∇χ E j ,i (t) (R n+1 ) are uniformly bounded. Hence, we can use the compactness theorem for sets of finite perimeter in order to select a (not relabeled) subsequence with the property that, for each fixed i ∈ {1, . . . , N }, Proof of Theorem 2.3 Let where E i (t) is a set of locally finite perimeter in R n+1 . Moreover, using that E j ,i (t) ⊂ (U ) 1/(4 j 1 /4 ) (see Proposition 4.2 and (4.7)) we see that L n+1 (E i (t) \ U ) = 0. Since sets of finite perimeter are defined up to measure zero sets, we can then assume without loss of generality that E i (t) ⊂ U . 
Hence, since H n (∂U ) < ∞, E i (t) is in fact a set of finite perimeter in R n+1 . Next, consider the complement of spt μ ∪ ( 0 × {0}) in U × R + , which is relatively open in U × R + , and let S be one of its connected components. For any point (x, t) ∈ S there exists r > 0 such that either We first consider the case t = 0. Since B 2 r (x) lies in the complement of 0 , there exists i(x, 0) ∈ {1, . . . , N } such that B 2 r (x) ⊂ E 0,i(x,0) , and thus B 2 r (x) ⊂ E j ,i(x,0) (0) for all ∈ N. Since also μ(B 2 r (x) × 0, r 2 ) = 0, we can apply Lemma 6.7 (2) and conclude that Similarly, if t > 0, since μ(B 2 r (x) × t − r 2 , t + r 2 ) = 0, we can apply Lemma 6.7(1) to conclude that there is a unique i(x, t) ∈ {1, . . . , N } such that Now, observe that if S is any connected component of the complement of spt μ∪( 0 ×{0}) in U × R + , then by (6.20) and (6.21), and since S is connected, for any two points (x, t) and (y, s) in S it has to be i(x, t) = i(y, s). For every i ∈ {1, . . . , N }, we can then let S(i) denote the union of all connected components S such that i(x, t) = i for every (x, t) ∈ S. It is clear that S(i) are open sets, and that E 0,i = {x ∈ U : (x, 0) ∈ S(i)} (notice that if x ∈ E 0,i then (x, 0) / ∈ spt μ as a consequence of Lemma 6.8), so that each S(i) is not empty. Furthermore, . For every t ∈ R + , we can thus define By examining the definition, one obtains (t) = {x ∈ U : (x, t) ∈ spt μ} for all t > 0. Combined with Lemma 6.6(1), we have (11). By Lemma 6.6(2), we have (3), and this also proves that (t) has empty interior, which shows (4). The claims (1) and (2) hold true by construction. (5) is a consequence of Proposition 6.4 and the definition of μ being the product measure. (6) is similar: if x ∈ U \ conv( 0 ∪ ∂ 0 ) then the half-line t ∈ R + → γ x (t) := (x, t) ∈ U × R + must be contained in the same connected component of (U × R + ) \ (spt μ ∪ ( 0 × {0})), for otherwise there would be t > 0 such that (x, t) ∈ spt μ, thus contradicting (5). For (7), by the strict convexity of U and (5), we have ∂ (t) ⊂ ∂ 0 for all t > 0. Later in Proposition 6.9, we prove (clos (spt V t )) \ U = ∂ 0 and ∂ 0 ⊂ ∂ (t) follows from this and (11). Coming to (8), we use (6.21) together with the conclusions in Proposition 4.2(1) to see that In particular, the lower semi-continuity of perimeter allows us to deduce that for any φ ∈ C c (U ; R + ) thus proving ∇χ E i (t) ≤ V t of (8). Using the cluster structure of each ∂E j (t) (see e.g. [26,Proposition 29.4]), we have in fact that for every φ as above , which shows the other statement N i=1 ∇χ E i (t) ≤ 2 V t in (8). Since the claim of (9) is interior in nature, the proof is identical to the case without boundary as in [20,Theorem 3.5(6)]. For the proof of (10), fort ≥ 0, we prove that χ E i (t) → χ E i (t) in L 1 (U ) as t →t for each i = 1, . . . , N . Since ∇χ E i (t) (U ) ≤ V t (U ) ≤ H n ( 0 ), for any t k →t, there exists a subsequence (denoted by the same index) andẼ i ⊂ U such that χ E i (t k ) → χẼ i in L 1 (U ) and L n+1 a.e. by the compactness theorem for sets of finite perimeter. We also have L n+1 (Ẽ i ∩Ẽ j ) = 0 for i = j and L n+1 (U \ ∪ N i=1Ẽ i ) = 0. For a contradiction, assume that L n+1 (E i (t) \Ẽ i ) > 0 for some i. Then, there must be U r (x) ⊂⊂ E i (t) such that L n+1 (U r (x) \Ẽ i ) > 0. We then use Theorem 2.3(9) with g(t) = L n+1 (E i (t) ∩ U r (x)), which gives lim t→t g(t) = g(t) = L n+1 (E i (t) ∩U r (x)) = L n+1 (U r (x)). 
On the other hand, This proves (9), and finishes the proof of (1)-(11) except for (7), which is independent and is proved once we prove Proposition 6.9. Conversely, let x ∈ ∂ 0 , and suppose for a contradiction that x / ∈ clos (spt V t ), so that there is a radius r > 0 with the property that B r (x) ∩ spt V t = ∅. Then, Theorem 2.3 (8) If t = 0, since E i (0) = E 0,i for every i = 1, . . . , N , the conclusion in (6.23) is evidently incompatible with (A4), thus providing the desired contradiction. We can then assume t > 0. By (A4), there are at least two indices i = i ∈ {1, . . . , N } and sequences of balls such that x j , x j ∈ ∂U , lim j→∞ x j = lim j→∞ x j = x and B r j (x j ) ∩ U ⊂ E 0,i whereas B r j (x j ) ∩ U ⊂ E 0,i . Let z denote any of the points x j or x j , and observe that the above condition guarantees that z ∈ ∂U \ ∂ 0 . In turn, by arguing as in Remark 6.2 we deduce that there is a neighborhood B ρ (z)∩U such that V t (B ρ (z)∩U ) = 0 for all t ≥ 0, and thus also ∇χ E l (t) (B ρ (z) ∩ U ) = 0 for every t ≥ 0 and for every l ∈ {1, . . . , N }. Since B ρ (z) ∩ U is connected this implies that B ρ (z) ∩ U ⊂ E l (t) for some l. Applying this argument with z = x j and z = x j we then find radii ρ j and ρ j such that, for all t ≥ 0. Since x j → x and x j → x this conclusion is again incompatible with (6.23), thus completing the proof. Proposition 6. 10 We have for each φ ∈ C c (U ; R + ) . where we also used Theorem 2.3 (8) and (10). This proves the first inequality. The second equality and the third inequality follow from (4.28), μ t = V t and V 0 = H n 0 . The proof of Theorem 2.2 is now complete: {V t } t≥0 is a Brakke flow with fixed boundary ∂ 0 due to Proposition 5.1(1), Theorem 5.4 and Proposition 6.9. Proposition 6.10 proves the claim on the continuity of measure at t = 0. where E i ⊂ U are sets of finite perimeter. Since, by Theorem 2.3 (3) The validity of Theorem 2.3(8) implies conclusion (1), namely that in the sense of Radon measures in U . As a consequence of (7.6), we have that spt ∇χ E i ⊂ spt V for every i = 1, . . . , N . Since V is a stationary integral varifold, the monotonicity formula implies that spt V is H n -rectifiable, and V = var(spt V , θ) for some upper semi-continuous θ : U → R + with θ(x) ≥ 1 at each x ∈ spt V . In particular, setting := spt V , we have where the last inequality is a consequence of (5.7) and the lower semicontinuity of the weight with respect to varifold convergence. → χ E i now holds pointwise on U \conv( 0 ∪∂ 0 ). We have not excluded the possibility that H n ( ) = 0. But this should imply V = 0 by (7.7), and ∇χ E i = 0 for every i ∈ {1, . . . , N } by (7.6), which is a contradiction to (2). Thus we have necessarily H n ( ) > 0 and this completes the proof of (3). In order to conclude the proof, we are just left with the boundary condition (4), namely Towards the first inclusion, suppose that x ∈ (clos (spt V )) \ U , and let {x h } ∞ h=1 be a sequence with x h ∈ spt V such that x h → x as h → ∞. If x / ∈ ∂ 0 then Proposition 6.1 implies that there exists r > 0 such that lim sup By the lower semi-continuity of the weight with respect to varifold convergence, we deduce then that V (U ∩ U r (x)) = 0. For h large enough so that |x − x h | < r we then have V (U ∩U r −|x−x h | (x h )) = 0, thus contradicting that x h ∈ spt V . For the second inclusion, let x ∈ ∂ 0 , and suppose towards a contradiction that x / ∈ clos(spt V ) \ U . Then, there exists a radius r > 0 such that U r (x) ∩ spt V = ∅. 
In particular, ∇χ E i (U ∩ U r (x)) = 0 for every i ∈ {1, . . . , N }. Since U is convex, U ∩ U r (x) is connected, and thus every χ E i is either identically 0 or 1 in U r (x) ∩ U , namely If z denotes any of the points x j or x j , Proposition 6.1 and Remark 6.2 ensure the existence of ρ such that V t (B ρ (z) ∩ U ) = 0 for all t ≥ 0. Again by lower semicontinuity of the weight with respect to varifold convergence, Since both x j → x and x j → x, this conclusion is incompatible with (7.9). This completes the proof. The stationary varifold V from Corollary 2.4 is a generalized minimal surface in U , and for this reason it can be thought of as a solution to Plateau's problem in U with the prescribed boundary ∂ 0 . Brakke flow provides, therefore, an interesting alternative approach to the existence theory for Plateau's problem compared to more classical methods based on mass (or area) minimization. Another novelty of this approach is that the structure of partitions allows to prescribe the boundary datum in the purely topological sense, by means of the constraint (clos (spt V )) \ U = ∂ 0 . This adds to the several other possible interpretations of the spanning conditions that have been proposed in the literature: among them, let us mention the homological boundary conditions in Federer and Fleming's theory of integral currents [12] or of integral currents mod( p) [11] (see also Brakke's covering space model for soap films [3]); the sliding boundary conditions in David's sliding minimizers [5,6]; and the homotopic spanning condition of Harrison [13], Harrison-Pugh [14] and De Lellis-Ghiraldin-Maggi [7]. Concerning the latter, we can actually show that, under a suitable extra assumption on the initial partition E 0 , a homotopic spanning condition is satisfied at all times along the flow. Before stating and proving this result, which is Proposition 7.4 below, let us first record the definition of homotopic spanning condition after [7]. Definition 7.1 (see [7,Definition 3]). Let n ≥ 2, and let be a closed subset of R n+1 . Consider the family C := γ : S 1 → R n+1 \ : γ is a smooth embedding of S 1 into R n+1 \ . (7.10) A subfamily C ⊂ C is said to be homotopically closed if γ ∈ C implies thatγ ∈ C for everỹ γ ∈ γ , where γ is the equivalence class of γ modulo homotopies in R n+1 \ . Given a homotopically closed C ⊂ C , a relatively closed subset K ⊂ R n+1 \ is C-spanning if 2 K ∩ γ = ∅ for every γ ∈ C . (7.11) Remark 7.2 If C ⊂ C contains a homotopically trivial curve, then any C-spanning set K will necessarily have non-empty interior (and therefore infinite H n measure). For this reason, we are only interested in subfamilies C with γ = 0 for every γ ∈ C. Definition 7. 3 We will say that a relatively closed subset K ⊂ R n+1 \ strongly homotopically spans if it C-spans for every homotopically closed family C ⊂ C which does not contain any homotopically trivial curve. Namely, if K ∩ γ = ∅ for every γ ∈ C such that γ = 0 in π 1 (R n+1 \ ). We can prove the following proposition, whose proof is a suitable adaptation of the argument in [7, Lemma 10]. Then, the set (t) strongly homotopically spans ∂ 0 for every t ∈ [0, ∞]. Proof Let γ : S 1 → R n+1 \ ∂ 0 be a smooth embedding that is not homotopically trivial in R n+1 \ ∂ 0 . The goal is to prove that, for every t ∈ [0, ∞], (t) ∩ γ = ∅. First observe that it cannot be γ ⊂ U , for otherwise γ would be homotopically trivial. 
For the same reason, since the ambient dimension is n + 1 ≥ 3 also γ ⊂ R n+1 \ clos U is incompatible with the properties of γ . Hence, we conclude that γ must necessarily intersect ∂U . We first prove the result under the additional assumption that γ and ∂U intersect transversally. We can then find finitely many closed arcs I h = [a h , b h ] ⊂ S 1 with the property that γ ∩U = h γ ((a h , b h ) Furthermore, this can be achieved under the additional condition that τ h (I h ) ∩ τ h (I h ) = ∅ for every h = h . We can then define a piecewise smooth embeddingγ of S 1 into R n+1 \ ∂ 0 such thatγ | I h := τ h | I h for every h, andγ = γ on the open set S 1 \ h I h . We have γ = γ in π 1 (R n+1 \ ∂ 0 ). We can then construct a smooth embeddingγ : S 1 → R n+1 \ ∂ 0 such that γ = γ in π 1 (R n+1 \ ∂ 0 ), and withγ ⊂ R n+1 \ ∂U . Since n + 1 ≥ 3 this contradicts the assumption that γ = 0 and completes the proof if γ and ∂U intersect transversally. Finally, we remove the transversality assumption. Let δ = δ(∂U ) > 0 be such that the tubular neighborhood (∂U ) 2δ has a well-defined smooth nearest point projection , and consider, for |s| < δ, the open sets U s having boundary ∂U s = {x − s ν U (x) : x ∈ ∂U }, where ν U is the exterior normal unit vector field to ∂U . Since γ is smooth, by Sard's theorem γ intersects ∂U s transversally for a.e. |s| < δ. Fix such an s ∈ (0, δ), and let s : R n+1 → R n+1 be the smooth diffeomorphism of R n+1 defined by s (x) := x + ϕ s (ρ U (x)) ν U ( (x)) , (7.12) where In particular, s maps ∂U s diffeomorphically onto ∂U , and furthermore s → id uniformly on R n+1 as s → 0 + . (7.13) Since γ intersects ∂U s transversally, the curve s • γ intersects ∂U transversally. Furthermore, since γ and ∂ 0 are two compact sets with empty intersection, (7.13) implies that if we choose s sufficiently small then also ( s • γ ) ∩ ∂ 0 = ∅. Since s • γ = γ = 0 in π 1 (R n+1 \ ∂ 0 ), the first part of the proof guarantees that for every t ∈ [0, ∞] we have (t)∩( s •γ ) = ∅. For every t we then have points z s (t) ∈ (t)∩ s •γ . Along a sequence s h → 0+, then, by compactness, (7.13), and the fact that each set (t) is closed, we have that the points z s h (t) converge to a point z 0 (t) ∈ (t) ∩ γ . The proof is now complete. Example 7.5 Suppose that U = U 1 (0) ⊂ R 3 , and ∂ 0 is the union of two parallel circles contained in S 2 = ∂U at distance 2h from one another, with h ∈ (0, 1). Then, ∂U \ ∂ 0 consists of the union of three connected components S u ∪ S l ∪ S d (here u, l, d stand for up, lateral, and down, respectively). If h is suitably small, then there are two smooth minimal catenoidal surfaces C 1 ⊂ U and C 2 ⊂ U , one stable and the other unstable, satisfying clos(C j ) \ U = ∂ 0 . Nonetheless if the initial partition {E 0,i } i satisfies ( ), then, as a consequence of Proposition 7.4, both C 1 and C 2 are not admissible limits of Brakke flow as in Corollary 2.4, since there exists a smooth and homotopically non-trivial embedding γ : S 1 → R 3 \ ∂ 0 having empty intersection with each of them. For instances, if N = 3 and the initial partition is such that S u ⊂ clos E 0,1 , S l ⊂ clos E 0,2 , and S d ⊂ clos E 0,3 , then the corresponding Brakke flows will converge, instead, to a singular minimal surface in U consisting of the union =C 1 ∪C 2 ∪ D, whereC j are pieces of catenoids, and D is a disc contained in the plane {z = 0}, which join together forming 120 • angles along the "free boundary" circle = ∂ D; see Fig. 1. 
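To make the smallness condition on h in Example 7.5 explicit, the following elementary computation is included purely as an illustration, under the assumption that the two circles are placed symmetrically at heights z = ±h. A catenoid with neck radius c > 0 is the surface
\[
x^2+y^2=c^2\cosh^2(z/c),
\]
and it contains both circles of radius ρ at heights z = ±h if and only if c cosh(h/c) = ρ. Setting t = h/c, this condition reads h/ρ = t/cosh t, and since
\[
\max_{t>0}\frac{t}{\cosh t}\approx 0.6627,\qquad\text{attained where }\cosh t = t\sinh t,\ t\approx 1.1997,
\]
there are exactly two admissible values of c (one stable and one unstable catenoid) when h/ρ lies below this threshold, and none above it. Since the circles of Example 7.5 lie on S^2, one has ρ = \sqrt{1-h^2}, so the condition is satisfied for every sufficiently small h.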
We will conclude the section with three remarks containing some interesting possible future research directions. Remark 7.6 First, we stress that the requirements on ∂ 0 are rather flexible, above all in terms of regularity. It would be interesting to characterize, for a given strictly convex domain U ⊂ R n+1 , all its admissible boundaries, namely all subsets ⊂ ∂U such that there are N ≥ 2 and E 0 , 0 as in Assumption 1.1 such that = ∂ 0 . A first observation is that admissible boundaries do not need to be countably (n −1)-rectifiable, or to have finite (n −1)dimensional Hausdorff measure: for example, it is not difficult to construct an admissible ⊂ ∂U 1 (0) in R 2 with H 1 ( ) > 0, essentially a "fat" Cantor set in S 1 . The assumption (A4) requires any admissible boundary to have empty interior. It is unclear whether this condition is also sufficient for a subset to be admissible. Remark 7.7 Let us explicitly observe that, even in the case when 0 (or more precisely V 0 := var( 0 , 1)) is stationary, it is false in general that V t = V 0 for t > 0. In other words, the approximation scheme which produces the Brakke flow V t may move the initial datum V 0 even when the latter is stationary. A simple example is a set consisting of two line segments with a crossing, for which multiple non-trivial solutions (depending on the choice of the initial partition) are possible; see Fig. 2. In fact, one can prove that such one-dimensional configuration cannot stay time-independent with respect to the Brakke flow constructed in the present paper: [21, Theorem 2.2], indeed, shows that one-dimensional Brakke flows obtained in the present paper and in [20] necessarily satisfy a specific angle condition at junctions for a.e. time, with the only admissible angles being 0, 60, or 120 degrees. Thus, depending on the initial labeling of domains, one of the two evolutions depicted in Fig. 2 has to occur instantly. If 0 is a smooth minimal surface with smooth boundary ∂ 0 , the uniqueness theorem for classical MCF should allow t ≡ 0 as the unique solution, even if the latter is unstable (i.e. the second variation is negative for some direction). In other words, in the smooth case we expect that there is no other Brakke flow starting from 0 other than the time-independent solution (notice, in passing, that both the area-reducing Lipschitz deformation step and the motion by smoothed mean curvature step in our time-discrete approximation of Brakke flow trivialize in this case -at least locally -, because smooth minimal surfaces are already locally area minimizing at suitably small scales around each point). On the other hand, in [36] we show that time-dependent solutions may arise even from the existence, on 0 , of singular points at which V 0 has a flat tangent cone, that is a tangent cone which is a plane T with multiplicity Q ≥ 2. It would be interesting to characterize the regularity properties of those stationary 0 with E 0,1 , . . . , E 0,N satisfying Assumption 1.1 and H n ( 0 \ ∪ N i=1 ∂ * E 0,i ) = 0 which do not allow any non-trivial Brakke flows (dynamically stable stationary varifolds, in the terminology introduced in [36]). We expect that such a 0 should have some local measure minimizing properties. Remark 7.8 Let and be as in Corollary 2.4 obtained as t k → ∞ along a Brakke flow. Since V is integral and stationary, V = var( , θ ) for some H n -measurable function θ : → N. 
One can check that and {E i } N i=1 (after removing empty E i 's if necessary) again satisfy the Assumption 1.1, thus we may apply Theorem 2.2 and obtain another Brakke flow with the same fixed boundary. Note that if we have V ({x : θ(x) ≥ 2}) > 0, then var( , 1) may not be stationary, and the Brakke flow starting from non-stationary var( , 1) is genuinely time-dependent. We then obtain another stationary varifold as t → ∞ by Corollary 2.4. It is likely that, after a finite number of iterations, this process produces a unit density stationary varifold which does not move anymore. The other possibility is also interesting, in that we would have infinitely many different integral stationary varifolds with the same boundary condition, each having strictly smaller H n measure than the previous one.
One can check that spt V and {E i } N i=1 (after removing empty E i 's if necessary) again satisfy the Assumption 1.1, thus we may apply Theorem 2.2 and obtain another Brakke flow with the same fixed boundary. Note that if we have V ({x : θ(x) ≥ 2}) > 0, then var(spt V , 1) may not be stationary, and the Brakke flow starting from non-stationary var(spt V , 1) is genuinely time-dependent. We then obtain another stationary varifold as t → ∞ by Corollary 2.4. It is likely that, after a finite number of iterations, this process produces a unit density stationary varifold which does not move anymore. The other possibility is also interesting, in that we would have infinitely many different integral stationary varifolds with the same boundary condition, each having strictly smaller H n measure than the previous one.
2022-11-29T14:16:59.491Z
2021-01-24T00:00:00.000
{ "year": 2021, "sha1": "57adb7a56717c24dee4648295f252197093f687c", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00526-020-01909-z.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "57adb7a56717c24dee4648295f252197093f687c", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
234850251
pes2o/s2orc
v3-fos-license
Interannual Variability of GPS Heights and Environmental Parameters over Europe and the Mediterranean Area : Vertical deformations of the Earth’s surface result from a host of geophysical and geological processes. Identification and assessment of the induced signals is key to addressing outstanding scientific questions, such as those related to the role played by the changing climate on height variations. This study, focused on the European and Mediterranean area, analyzed the GPS height time series of 114 well-distributed stations with the aim of identifying spatially coherent signals likely related to variations of environmental parameters, such as atmospheric surface pressure (SP) and terrestrial water storage (TWS). Linear trends and seasonality were removed from all the time series before applying the principal component analysis (PCA) to identify the main patterns of the space/time interannual variability. Coherent height variations on timescales of about 5 and 10 years were identified by the first and second mode, respectively. They were explained by invoking loading of the crust. Single-value decomposition (SVD) was used to study the coupled interannual space/time variability between the variable pairs GPS height–SP and GPS height–TWS. A decadal timescale was identified that related height and TWS variations. Features common to the height series and to those of a few climate indices—namely, the Arctic Oscillation (AO), the North Atlantic Oscillation (NAO), the East Atlantic (EA), and the multivariate El Niño Southern Oscillation (ENSO) index (MEI)—were also investigated. We found significant correlations only with the MEI. The first height PCA mode of variability, showing a nearly 5-year fluctuation, was anticorrelated ( − 0.23) with MEI. The second mode, characterized by a decadal fluctuation, was well correlated (+0.58) with MEI; the spatial distribution of the correlation revealed, for Europe and the Mediterranean area, height decrease till 2015, followed by increase, while Scandinavian and Baltic countries showed the opposite behavior. Introduction A multiplicity of processes is responsible for the vertical movements of the land occurring at different spatial and temporal scales and with different magnitudes. We may recall solid-Earth tides, the glacial isostatic adjustment (GIA), the loadings, such as the pressure exerted by the atmosphere, the liquid water and the snow, seismic activity, volcanic eruptions, sedimentation, landslides, and ground fluid exploitation. Except for tides, these motions are typically an order of magnitude smaller than the horizontal deformation [1]. Therefore, the identification and description of their distinctive nature has always been challenging. Today, new observational capabilities allow monitoring, in a global reference system and with a high degree of accuracy, both horizontal and vertical velocities of stations located on the Earth's surface. Space geodetic observations, such as those acquired by GPS systems (Global Positioning System) and by InSAR imaging (Interferometric Synthetic Aperture Radar), have enabled significant advances in the description of the processes influencing vertical surface deformations. In the framework of studies conducted to assess the consequences of climate variability/change, the reliable quantification of the vertical deformation is key to resolving the contribution of the different aspects. 
Vertical deformations occur on a wide range of temporal scales from millions of years to seconds and can be due to global geophysical processes but also to local causes. As an example, the last glacial maximum occurred about 20 kyr BP; however, the surface of the Earth still rebounds visco-elastically to the subsequent melting of the ice load. The vertical movements of the land associated with this process, known as GIA (glacial isostatic adjustment), are clearly recognizable in different types of records. Rapid deformation, within seconds, takes place locally in conjunction with earthquakes and volcanic activity. At annual scale, Blewitt et al. [2] and Blewitt and Lavallée [3,4] showed that the most significant vertical displacements of the Earth's crust are driven by environmental mass redistribution generating changes in the gravitational and surface forces. The stress response in the solid Earth generated by changes/variations of surface atmospheric pressure (SP), terrestrial water storage (TWS), and of the oceans is usually accompanied by patterns of surface deformation [5,6]. Accurate monitoring of the vertical motions of the crust is now possible, thus contributing to advancing the understanding of the related dynamic processesimportant in the light of the impacts of climate variability/change on our planet, which are becoming more and more dramatically evident. It is well recognized that atmospheric pressure loading causes deformations of the land surface; the induced annual vertical displacements can be as large as 18 mm in mid-to high-latitudes [7][8][9]. The TWS is also a significant source of loading on the Earth's crust. It is the loading induced by the sum of all waters on the land surface and in the subsurface, including water stored in the vegetation [10]. TWS can cause vertical displacements between 9 and 15 mm over most of the continental areas [11]. Earth tides and ocean loading also play a pivotal role when dealing with vertical crustal displacements. In the case of ocean loading, the forcing is due to both the tidal and non-tidal component. Deformation of the sea floor and surface displacements of the adjacent lands up to several centimeters result from the elastic response of the Earth's crust to ocean tides (tidal loading) [12]. The ocean is also responsible for the so-called non-tidal ocean loading [13][14][15]. These changes of the ocean bottom pressure are due to different processes-namely, the internal mass redistribution of the ocean driven by atmospheric circulation, the global water cycle, and a change in the integrated atmospheric mass over the ocean areas [14]. In general, the seasonal variability due to the superposition of the environmental loadings described above is the most prominent short-period feature characterizing the GPS height time series. Modelling of the environmental loading series made by GFZ (GeoForschungsZentrum, Potsdam, Germany) and EOST (École and Observatoire des Sciences de la Terre, Strasbourg, France) indicates average annual amplitudes of 2.7 and 3.1 mm, respectively, explaining about 40% of the annual amplitude of GPS height time series [16]. Long-period signals of tectonic nature contribute to the observed height variability, which may also be affected by the consequences of anthropogenic activities. In this work, we analyzed the residual heights time series of 114 GPS stations distributed over Europe and the Mediterranean area. 
Residuals were the series of the GPS height estimates after having removed the relevant seasonal signal and the linear trend. The purpose of the work was to identify, by means of PCA (principal component analysis), the main modes of interannual variability in the residuals of the GPS heights, SP, and TWS. The SVD (single-value decomposition) technique was also used with the aim of studying the coupled variability between the height residuals and those of the SP and TWS. The correlations between the vertical deformations and the multivariate ENSO (El Niño Southern Oscillation) index (MEI) were also investigated. The possibility of disentangling and interpreting the effects of the different geophysical processes is crucial for providing insights into the evolution of the increasing stress put on the Earth by changing climate. During the past two decades at least, many studies have been published describing the seasonal variability of the GPS-estimated heights in response to the force exerted by different types of environmental loads. Among others, recent contributions include the following [11,[15][16][17][18]. Long-term signals due to different natural and anthropogenic processes may also characterize the GPS height time series [1,19]. A recent work by Springer et al. [20] assessed hydrological loading, even at daily time scale, in GPS height time series over Europe. Not as many studies are yet available for Europe and the Mediterranean area concerning interannual variations of GPS heights in relation to changing climate and variability of environmental parameters. Over southern India, Tiwari et al. [21], comparing deformation derived from GRACE (Gravity Recovery and Climate Experiment) and GPS data, suggest that hydrological variations are the major cause of vertical deformation measured by GPS at seasonal and interannual time scales. In southwestern USA, Jin and Zhang [22] found consistency over the six years from 2008 to 2014 between the interannual TWS changes derived from GPS heights and the pattern of precipitation, which also included the severe drought in 2012. In the USA, Adusumilli et al. [23] found positive correlation between TWS anomalies and the El Niño/Southern Oscillation in the southeastern Texas-Gulf and South Atlantic-Gulf watersheds and an unexpected negative correlation in the southwest. Accelerating uplift in Iceland resulting from climate change was found by Compton et al. [24] from the analysis of GPS observations over the period 1995-July 2014. Zerbini et al. [25], using the empirical orthogonal function (EOF), identified spatially coherent patterns in the GPS height time series of 19 stations located in Europe and the Mediterranean area over an 11-year period (1999-2009) and in those of SP, TWS, and GRACE surface mass anomalies. This study benefited from the global development that occurred during the past twenty years, which led to the installation of a very many permanent GPS sites and to the public availability of accurate time series of the coordinates. Over the continental area object of the study, our analysis identified coherent height variations with timescales of about 5 and 10 years, which could be related to the space and time variability of the SP, TWS, and MEI. The observed height variations are explained by crustal loading induced by mass variations. The entire study area behaved coherently over the 5-year period, while the spatial pattern of the decadal fluctuation was characterized by a north-south gradient. 
This is likely attributable to the strong 2015-2016 El Niño event and to the associated hydroclimate anomalies that in the European-Mediterranean area are, in general, described by a north-south path. The results of the SVD analysis of the height and SP elucidate the different response to the same SP forces of inland and coastal sites, with the former showing larger effects. The second SVD mode between height and TWS shows a nearly decadal variation, which was not found in the SVD results of the pair height and SP, suggesting that the observed decadal variation of the height was due to the TWS variations rather than to those of SP. The spatial distribution of the correlation coefficients between height and MEI identified two coherent regions, the southwest, where height and MEI are anticorrelated, and the northeast, where they are correlated. The second height time component turned out to be well correlated with MEI over the decadal time scale. Materials and Methods There are several parameters of interest for this study. First, we discuss the heights (Up local coordinate) of the 114 GPS stations identified for this work. These were rather uniformly distributed in the European and Mediterranean area; Figure 1 shows the location of the stations. In the second place, we introduce the SP and TWS at the same sites. For each site, weekly time series of these parameters were created. GPS Up Time Series The daily values of the GPS Up coordinate time series were obtained from the Nevada Geodetic Laboratory (NGL) at their web site (http://geodesy.unr.edu/, accessed on 14 July 2020, ref. [26]). We downloaded the latest release labelled GipsyX-1.0/IGS14/Repro3.0. A first check of the data series showed that, over the area of interest to the study, many stations started to acquire data continuously around 2010. The next step was to inspect the GPS Up data, starting from 2010, to check the completeness of the daily time series using the following relationship C = (N/τ) where N is the number of daily data, and τ is the period of activity in days. The completeness threshold was set to C = 92%. This was an arbitrary choice; however, it represented a reasonably good compromise between discarding too many stations if the percentage was set higher, versus accepting many more stations but with a lower percentage of completeness. The selected stations started to acquire data at different epochs; thus, the subsequent action was to cut the time series over the period of maximum overlap to favor the application of the PCA and SVD methodologies. This led to selection of the time span from 9 June 2010 to 5 September 2018. Outliers were removed from the data series using a 3-σ rejection criterion, which identified as outliers those observations deviating from the mean by an amount equal or greater than 3 times the standard deviation. Linear trends were then estimated for each series and removed from the data sets, thus creating residual time series for each station. The estimated linear trends should have accounted for the long-period tectonic and anthropic deformation. The GPS coordinates time series, in particular the Up component, can be characterized by sudden jumps. These are offsets or discontinuities that it is necessary to account for and remove because they would have a detrimental effect on the estimate of the stations position and velocity. A large percentage of offsets, about 66% (http://sopac.ucsd.edu/, accessed on 14 July 2020, ref. 
[27]), were due to well-known causes, thus allowing the identification of the epoch at which the jump took place. The NGL provided, for each site and for specific causes, the epochs at which discontinuities occurred. The specific causes were equipment changes, earthquakes, and change of reference frame. For jumps of undetermined origin, the epoch of occurrence must be properly estimated. In this work, the epoch and magnitude of the observed discontinuities in the residual Up time series were estimated by means of the STARS (Sequential t test Analysis of Regime Shifts) methodology [28]. Once the discontinuities were removed, the residual Up series were deseasonalized after estimating, by stacking of the daily values, a mean seasonal cycle for each station. Finally, weekly mean values were computed. Atmospheric Pressure Time Series The atmospheric pressure data consisted of surface pressure (SP) time series of the NCEP Daily Global Reanalyses over the period 2010-2019 on a 2.5 • × 2.5 • grid covering the latitudinal range 25 • N-70 • N and the longitudinal range 30 • W-60 • E. The daily SP values are given in hPa. Data were provided by the NOAA/OAR/ESRL PSL, Boulder, CO, USA, from their Web site at https://psl.noaa.gov/ (accessed on 14 July 2020) [29]. We recall here that a reanalysis is a systematic approach to produce data sets for climate monitoring and research. It uses an unchanging data assimilation scheme and model ingesting all available observations every 6-12 h, thus providing a dynamically consistent estimate of the climate state at each time step. We interpolated the SP data in order to obtain pressure values at the locations of the 114 GPS sites shown in Figure 1. The resulting time series were detrended and deseasonalized. Finally, weekly means were computed. Terrestrial Water Storage Time Series The TWS represents the summation of all water on the land surface and in the subsurface. It includes surface soil moisture, root zone soil moisture, groundwater, snow, ice, water stored in the vegetation, and river and lake water [10]. The TWS data set used in this work was the M2T1NXLND which was one of the products of Modern-Era Retrospective analysis for Research and Applications version 2 (MERRA-2), i.e., the project that places the NASA Earth Observation System (EOS) suite of observations in a climate context [30]. These data are available on the NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) Web site at https: //disc.gsfc.nasa.gov/ (accessed on 14 July 2020). We downloaded a data series of daily means with spatial resolution 0.5 • × 0.625 • spanning the period 2010-2019. The daily time series were detrended and deseasonalized, and weekly mean time series were estimated. These data were then interpolated in order to obtain values of the TWS at the GPS locations. Climate Indexes We investigated possible correlations between height variations and climate indexes, such as MEI, Arctic Oscillation (AO), North Atlantic Oscillation (NAO), and the East Atlantic Pattern (EA). The MEI combines both oceanic and atmospheric variables in a single index to provide an assessment of the ENSO (El Niño Southern Oscillation). This is a periodic fluctuation (2-to-7 years), across the equatorial Pacific Ocean, of the sea surface temperature (SST) and the air pressure of the overlying atmosphere. The ENSO consists of the alternation of two phases: a warm phase called El Niño and a cold phase called La Niña. 
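The processing chain applied to the GPS Up, SP, and TWS series in the preceding subsections (outlier screening, linear detrending, removal of a mean seasonal cycle estimated by stacking, and weekly averaging) can be sketched in Python as follows. This is a minimal illustration only: the use of pandas, the function name, and the synthetic test data are assumptions made for the example, and the STARS-based offset correction and the gap filling by adjacent averaging are intentionally omitted.

import numpy as np
import pandas as pd

def residual_weekly_series(daily: pd.Series) -> pd.Series:
    """Turn a daily series (e.g. GPS Up at one site) into detrended,
    deseasonalized weekly-mean residuals, following the steps described
    in the text (3-sigma outlier rejection, linear detrend, mean seasonal
    cycle removed by stacking daily values, weekly means).
    Offset (STARS) correction and gap filling are not included here."""
    s = daily.dropna()

    # 3-sigma outlier rejection with respect to the series mean
    s = s[np.abs(s - s.mean()) < 3.0 * s.std()]

    # remove a linear trend fitted by least squares
    t = (s.index - s.index[0]).days.astype(float)
    slope, intercept = np.polyfit(t, s.values, 1)
    s = s - (slope * t + intercept)

    # mean seasonal cycle estimated by stacking values with the same day of year
    season = s.groupby(s.index.dayofyear).transform("mean")
    s = s - season

    # weekly means of the residuals
    return s.resample("7D").mean()

if __name__ == "__main__":
    # synthetic daily series standing in for a real station file (illustration only)
    idx = pd.date_range("2010-06-09", "2018-09-05", freq="D")
    up = pd.Series(2.0 * np.sin(2 * np.pi * idx.dayofyear / 365.25)
                   + 0.001 * np.arange(len(idx))
                   + np.random.normal(0.0, 1.0, len(idx)), index=idx)
    print(residual_weekly_series(up).head())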
It is the time series of the leading combined EOF of five different variables-namely, the sea level pressure, the sea surface temperature, the zonal and meridional components of the surface wind, and the outgoing longwave radiation over the tropical Pacific basin ( https://www.psl.noaa.gov/enso/mei/, accessed on 14 July 2020). The AO, NAO, and EA indexes describe major modes of variability of the atmospheric pressure field. In particular, AO accounts for the Northern Hemisphere field, and NAO and EA more specifically for the North Atlantic pressure field. PCA and SVD Methodologies The methodologies adopted to derive the main patterns of the space-time variability and co-variability of the various parameters were PCA and the SVD. PCA is a statistical method used for the analysis of the spatial and temporal variability of an individual dataset, and it is widely used in the geophysical environment. The basic concept on which the technique works is to reduce the dimensionality of a dataset by providing a compact description of the temporal and spatial variability of the dataset of a single variable in terms of orthogonal components (statistical modes), while preserving as much statistical information as possible. A review and recent developments on this subject are provided by Joliffe and Cadima [31]. In principle, PCA requires complete data sets; that is, all the time series should be defined at the same epochs. Considering that the great majority, if not all, the GPS series were characterized by missing data, this would have led to a massive loss of information and would have reduced the ability to detect common patterns. Therefore, in order minimize the data loss, we decided to fill the data gaps. The simplest approach to perform this task is to provide values derived by the time averaging of the series. Other methods are based on iterative algorithms, for example, those of Papoulis and Gerchberg [32,33] and the expectation maximization algorithm [34], which are among the most used approaches. However, the iterative characteristics of these methods, with the relevant computational burden, and the low convergence rates preclude their use in several applications. Among the available GPS Up time series [26] for the area of interest of this study, we selected those in which the longest data gap was two months. The missing weekly means were estimated by using an adjacent averaging procedure. The time window was, of course, an arbitrary choice, and we believe that it was quite appropriate for our work since we were looking for interannual variability common to the time series. However, we point out that only three time series were characterized by data gaps longer than 1 month. The variables analyzed using the PCA approach were the residual series of the GPS Up, the SP, and TWS. The SVD method, which has the same mathematical basis of PCA [35], allows the coupling of different fields to be explored by identifying significant correlations between pairs of variables. The approach enables extracting orthogonal components that are common to both variables, therefore representing modes of coupled variability. We compared the interannual variations observed in the residual series of the Up coordinate of the 114 GPS stations with those present in the residual time series of the SP and TWS. We shall remark that PCA and SVD are mathematical tools providing common modes and statistical correlations between pairs of parameters, respectively. 
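The PCA step described above can be sketched as follows, assuming the preprocessed weekly residuals of all 114 stations have been arranged in a (time × stations) matrix; the SVD-based implementation, the variable names, and the synthetic test data are illustrative assumptions rather than the actual processing used in the study.

import numpy as np

def pca_modes(X: np.ndarray, n_modes: int = 4):
    """Principal component analysis of a (time x stations) residual matrix X.

    Each column is assumed to be a detrended, deseasonalized weekly series;
    columns are standardized before the decomposition, as described in the text.
    Returns spatial patterns (stations x modes), time components (time x modes)
    and the fraction of variance explained by each mode."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize each station series
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)   # Z = U S V^T
    variance = S**2 / np.sum(S**2)                     # explained-variance fractions
    spatial_patterns = Vt[:n_modes].T                  # one column of loadings per mode
    time_components = U[:, :n_modes] * S[:n_modes]     # principal component time series
    return spatial_patterns, time_components, variance[:n_modes]

def running_mean(y: np.ndarray, window: int = 4) -> np.ndarray:
    """4-point (roughly monthly) running mean used to smooth the weekly time components."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="same")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(430, 114))                    # ~430 weeks x 114 stations (synthetic)
    patterns, pcs, var = pca_modes(X)
    print("variance explained by the first four modes:", np.round(var, 3))
    smoothed_pc1 = running_mean(pcs[:, 0])

Computing the principal components through an SVD of the standardized data matrix avoids forming the covariance matrix explicitly and returns the modes already ordered by explained variance.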
Therefore, these methodologies do not allow direct inference of the physical mechanisms responsible for the observed behaviors, which should be unraveled by means of appropriate modelling. Results of the PCA Analysis In this section we present the results of the PCA analysis, performed on the residuals series of the GPS Up coordinate, SP, and TWS. The three data sets were organized in three matrices where each column was a detrended, deseasonalized and standardized time series of weekly values. The analysis allowed the spatial pattern coefficients (Figures 2, 4 and 6) and the time components (Figures 3, 5, and 7) to be obtained. The maps of the spatial pattern coefficients of the three data sets were created by assigning the PCA-derived value to the station points on the map. For display purposes, the spatial pattern coefficients were multiplied by 100 because they were always smaller than the unit. The series of the time components were smoothed by means of a 4 weekly data points (1 month) running mean. Spatial Patterns and Time Components of the GPS Up Residuals The first four modes explain 54% of the total variance, they are listed in Table 1. Figure 2a,b, presents the spatial behavior of the first two modes of the residuals of the GPS Up component. The first spatial pattern, presented in Figure 2a, shows a coherent behavior of Europe, Scandinavia, and the Mediterranean area (coefficients of the same sign). On the Atlantic side, the Azores Islands and Iceland show coefficients close to zero. Figure 3a presents the first time component describing the main interannual variations of the GPS Up coordinate residuals. These are characterized by large oscillations that may be interpreted in terms of loading variations on the Earth's crust occurring in connection with variations of essential climate variables (ECV), such as surface pressure, temperature, precipitation, and land groundwater [36]. The first mode of variability explains about 33% of the total variance (Table 1) which is a significant amount, if one considers the large number of stations (114). In the following, for a few variables, we highlight main anomalies observed during the years of this study in the effort to recognize fingerprints of these anomalies in the residual series of the Up time component. The year 2011 was a generally warm year all over Europe, the British Isles, Scandinavia, and the Mediterranean area. The year had a warm start and finish, with above-average temperatures in January and February and during the months of September, November, and December [37]. During February-April 2011 there was also a significant rain deficit over large parts of Europe, and similar conditions occurred in autumn. In 2011, the first time component of the Up residuals is characterized by a clear oscillation showing an Up increase since the start of the year, likely associated with unloading of the crust. In December 2011, drought conditions were confined to the Mediterranean area; however, from January to March 2012, the drought period first spread to Western Europe and then on to Central and Southeastern Europe where it peaked in March [38]. The Up residuals show a steep increase till about the second half to April. Although the year 2013 was also anomalously warm over Europe, it was characterized during spring by extreme precipitation in the Alpine region and in Austria, Czech Republic, Germany, Poland, and Switzerland. Great Britain experienced the coldest spring since 1962, and Spain had the wettest March since 1947 [39]. 
The Up residuals do not exhibit any clear behavior during the whole year. The year 2014 was the warmest year on record in 19 European countries. France, Spain, and Portugal experienced above-average temperatures in January, and all over Europe, February and March were characterized by exceptionally warm and wet conditions. Annual rainfall was above average for several countries in Europe and in the Balkans [40]. This might explain the large oscillation observed at the beginning of 2014, with a noticeable increase of the Up component (crustal unloading) till the second half of February, followed by a sharp decrease (crustal loading) due to excess of precipitation and related increase of groundwater storage. During 2015, heatwaves affected Central and Eastern Europe from May through September. The months of November and December were also unusually warm [41]. During summer, large portions of continental Europe were affected by one of the most severe droughts since 2003 [42]. The Up residuals display a clear oscillation, peaking during summer, likely associated with unloading of the crust. Western and Central Europe were again affected by a record-breaking drought from July 2016 to June 2017 [43], as well as many parts of the Mediterranean region [44]. The winter of 2017 was the second driest winter in the ERA-Interim record in terms of precipitation [45]. Additionally, during 2018 large parts of Europe were affected by exceptional heat and drought through the late spring and summer [46], with a significant increase of the Up residuals till the second half of March. However, by examining all together the nine-year period 2010-2019 of the Up residuals shown in Figure 3a, we can observe both variations related to significant weather and climate events of a particular year, as described above, and also a nearly 5-year oscillation that might be associated with the sequence of severe droughts that affected the study area. In fact, the GPS Up residuals show a marked increase during the three-year period 2010-2012 (crustal unloading, droughts 2010, 2011, and 2012), followed by a period of two years (2013 and 2014) during which an Up decrease is apparent and again a steep increase starting from 2015 (crustal unloading, droughts 2015, 2016, 2017, and 2018). Figure 2b presents the second spatial pattern, characterized by a south (negative coefficients)-north (positive coefficients) gradient. This mode explains almost 12% of the total variance (Table 1), which is about one-third of the first one. The second time component shown in Figure 3b is characterized by a nearly decadal oscillation, with change of slope in 2015 and superimposed shorter-period variations. The behavior of this time component might be related to decadal impacts of the ENSO phenomenon. Although clear associations of European hydroclimate anomalies with extreme El Niños are still a subject of debate [47], there are studies showing that, in Europe, the ENSO climate impacts are generally characterized by a north-south path [48]. In particular, concerning precipitation, El Niño is connected to negative anomaly in Scandinavia and positive anomaly in Southern Europe. For La Niña events, these relationships are close to symmetric. 
During the period of our study, the time series of the MEI shows a strong La Niña event in 2010-2011 (positive precipitation anomaly in Scandinavia and negative anomaly in Southern Europe), followed by a moderate event in 2011-2012 gradually weakening till the onset, at the beginning of 2015, of a strong El Niño lasting for about two years (negative precipitation anomaly in Scandinavia and positive anomaly in Southern Europe). The pattern exhibited by the second Up time component in Figure 3b is compatible in terms of loading/unloading effects on the crust with this scenario. Southern Europe and the Mediterranean are characterized, in fact, by negative coefficients, as illustrated by Figure 2b, indicating decrease of the Up from 2010 till 2015 (weakening of La Niña), followed by an Up increase (unloading) in the remaining period related to the strong El Niño event. Figure 2b indicates that Scandinavia, or more generally, the northeast (positive coefficients) shows increasing Up (weakening of La Niña), followed by a decrease after 2015. In brief, the Up coordinate and hydrology appear to be connected to a significant extent. The first mode of the Up variability is related to local hydrological changes on seasonal to interannual time scales. The second mode appears to be related to hydrological variations modulated by the ENSO. Table 2 lists the first four modes of the SP residuals, which explain about 90% of the data variability; the first mode alone contributes 50%. Before being analyzed with the PCA methodology, the SP time series were detrended, deseasonalized, and finally standardized. Figure 4a,b and Figure 5a,b present the spatial patterns and the time components of the first two modes, respectively. Figure 4a illustrates the map of the first spatial pattern coefficients, showing over the Atlantic side the meridional pressure difference between the Icelandic Low (positive coefficients) and the Azores anticyclone (slightly negative coefficients). These correspond to the two poles of the NAO. The north-south pressure gradient is also clearly identified by two coherent areas, one including the British Isles, Central Europe, the Mediterranean, the Balkans, and southern Scandinavia characterized by negative coefficients, and a second one with Iceland and central and northern Scandinavia characterized by slightly positive coefficients. Figure 4b shows the presence of a southwestnortheast gradient related to opposite pressure variations between the Mediterranean regions and Scandinavia. Table 3 lists the first four spatial patterns of the TWS residuals, explaining 60% of the variance of the data set. Figure 6a presents a map of the coefficients of the first spatial pattern; it shows that the coefficients are positive all over Europe. In particular, the stations in Central Europe are characterized by a larger magnitude of the coefficients. Figure 7 shows the first two time components of the TWS residuals. Both time series were smoothed by means of a 4 weekly data points (1 month) running mean. Table 3. Percentage of variance explained by the first four modes of variability of the terrestrial water storage (TWS) residuals. Before being analyzed with the PCA methodology, the TWS time series were detrended, deseasonalized, and finally standardized. Modes Variance (%) The first time component, Figure 7a, explains almost 30% of the variance (Table 3); it is characterized by large oscillations with period of about 2 years. 
A peak value can be recognized at the end of 2010, followed by a minimum during the first few months of the 2011. The year 2010 was a very wet year in large parts of Central and Southeastern Europe and adjacent areas of Asia, with parts of the region experiencing rainfall 50% or more above normal [50]. The maximum occurred after a period of heavy rainfalls that started in July 2010 and ended in December 2010. The spring of 2011 was particularly dry in the western part of Europe, many areas of which received less than 40% of usual annual precipitation [38]. In December 2011, drought conditions were basically confined to the Mediterranean area. During spring 2012, much of Europe was characterized by unusual warmth and dry weather peaking in March, i.e., when the minimum occurs in the first time component of the TWS residuals as shown in Figure 7a. However, a marked difference between Northern and Southern Europe was observed during 2012, with most of Northern Europe experiencing above-average precipitation, while Southern Europe experienced below-average precipitation [51]. The year 2013 was the sixth warmest on record across Europe, and many regions were warmer than average already at the start of the year [39]. Figure 7a shows loss of TWS during the whole year except for a short and small fluctuation at the end. The beginning of 2014 till March was also exceptionally warm in Europe, as evidenced by the yearly minimum of the first time component, but most of the year in Europe was characterized by rainfall above average [40]. During 2016, precipitation was close to average over most of Central and Western Europe, with a very wet first half of the year contrasting with a dry second half. December was also extremely dry, with many areas having less than 20% of normal precipitation [52]. Finally, the figure shows, for the year 2017, a rapid decrease until about March in conjunction with temperatures well above average throughout the year but with the strongest anomalies early in the year, from January to March. A marked increase then follows, likely because the most extensive area with annual rainfall above the 90th percentile in 2017 was in Northeastern Europe, extending as far west as Northern Germany and Southern Norway [44]. Results of the SVD Analysis In this section, the interannual variations observed in the residual time series of the GPS Up coordinate of the 114 stations are compared, by means of the SVD approach, with those present in the residual time series of the SP and TWS. The SVD approach allows recognition of significant correlations between pairs of variables; more specifically, the SVD analysis of two data fields identifies only those modes representing coupled variability. Each mode is described by two spatial patterns: one for each variable and two time components. In the following, we describe the results of the analysis for the pairs Up-SP and Up-TWS. Spatial Patterns and Time Components of the Residuals of the Up-SP Pair The first four modes of the pair SP and GPS Up coordinate account for 84.5% of the total covariance. The first mode alone of the coupled variability explains 52% of the total covariance (Table 4). . Both spatial patterns are coherent over the study area, with SP characterized by negative coefficients and the Up coordinate by positive coefficients. This mode identifies anticorrelation between the SP and the Up time series, representative of the vertical crustal deformation induced by atmospheric loading. 
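For readers unfamiliar with the technique, a minimal sketch of the SVD (maximum covariance) analysis of two coupled fields is given below, assuming two residual matrices sampled at common epochs. The squared covariance fraction of each mode plays the role of the "percentage of total covariance" reported for the coupled modes, and anticorrelated time components are the signature of the loading response discussed above. All array names and sizes are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_epochs, n_up, n_sp = 470, 114, 114
up = rng.standard_normal((n_epochs, n_up))   # Up residuals (epochs x stations)
sp = rng.standard_normal((n_epochs, n_sp))   # SP residuals at the same epochs

up_c = up - up.mean(axis=0)
sp_c = sp - sp.mean(axis=0)

# Cross-covariance matrix between the two fields, then its SVD.
C = up_c.T @ sp_c / (n_epochs - 1)           # shape (n_up, n_sp)
U, S, Vt = np.linalg.svd(C, full_matrices=False)

# Fraction of squared covariance explained by each coupled mode.
scf = S**2 / np.sum(S**2)

# Spatial patterns of mode k are U[:, k] (Up) and Vt[k, :] (SP);
# the corresponding time components are obtained by projection.
k = 0
up_time_component = up_c @ U[:, k]
sp_time_component = sp_c @ Vt[k, :]

# Anticorrelated time components indicate a loading-type response of the crust.
r = np.corrcoef(up_time_component, sp_time_component)[0, 1]
print(f"Mode 1: SCF = {scf[0]:.2%}, correlation of time components = {r:.2f}")
```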
In particular, in Central Europe, Figure 8b shows larger positive values of the coefficients than those of the coastal areas. This can be explained by the different response of coastal and inland sites to the same pressure forcing. Larger effects of SP loading are expected in continental interiors [53,54]. Figure 9 presents the first SVD time components, where a 5-year oscillation can be recognized. A similar feature is also identifiable in the first time component resulting from the PCA analysis of the Up residuals, as shown in Figure 3a. The second coupled mode of variability explains 19.43% of the total covariance. Figure 10 illustrates the second SVD spatial patterns of the SP (panel a) and GPS Up (panel b) residuals, respectively. Both spatial patterns show a clear south-north gradient, likely due to the SP difference between southern and warmer regions (high SP) and northern and cooler areas (low SP). The two fields are anticorrelated, thus supporting the response mechanism of the Earth's crust to loadings. Figure 11 presents the second SVD time components, which are mostly characterized by short-period variability. Table 5 lists the first four SVD modes of the coupled variability of the pair TWS and GPS Up residuals. They account for 51% of the total covariance. The first mode explains 20% of the total covariance. Figure 12a presents the first SVD spatial pattern of the TWS, characterized by negative coefficients all over Western and Central Europe, the Mediterranean, and the Balkans, while Scandinavia, Baltic countries, and Western Russia show positive coefficients. Figure 12b describes the first coupled mode of the GPS Up, which exhibits opposite behavior with respect to that of the TWS. The observed anticorrelation suggests that this mode is likely representative of the vertical deformation induced by the TWS loading on the Earth's crust. The second coupled mode of variability explains 15% of total covariance. Figure 14a features the second spatial pattern of the TWS residuals, characterized by negative values in Eastern Europe, Baltic countries, Western Russia, and central-northern Scandinavia. Elsewhere, the coefficients are mostly slightly positive. Figure 14b shows the second spatial pattern of the Up coordinate exhibiting an opposite behavior with respect to that of the TWS, thus further supporting the idea of the loading effect on the Earth's crust exerted by variations of the TWS. Figure 15 presents the second SVD time components, which are characterized by a parabolic variation (decadal period), with superimposed interannual fluctuations. Additionally, quite noticeable is the large fluctuation during the years 2016, 2017, and 2018, when climate warming caused record Northern Hemisphere average temperatures [55], and the European-Mediterranean area was affected by severe droughts. A long-period oscillation of similar shape does not appear in the SVD time components of the Up-SP pair, suggesting that the observed behavior is mostly due to the impact of the TWS. GPS Up and Climate Indexes Because in our study no significant correlations were found between the Up component and the AO, NAO, and EA indexes, in this section we only describe the correlation with the MEI. We analyzed the correlation of the GPS Up residuals with the MEI because numerous studies have shed light on the association between precipitation in the European-Mediterranean region and the ENSO [56]. 
In order to reduce the potential effect of local anomalies, the GPS Up residuals were represented using the first two modes of variability, identified in Section 3.1. Since MEI is provided as a series of monthly values, monthly Up residuals were also estimated. Figure 16 presents the monthly MEI time series made available by NOAA. Figure 18 shows the spatial distribution of the correlation coefficients between the MEI time series and those of the Up coordinate. The Up time series were reconstructed by means of the first two modes of variability (accounting for about 44% of total variance, see Table 1) of the PCA with the aim of avoiding possible disturbing signals induced by local effects. The grey dots identify those stations whose time series are not significantly correlated (p > 0.05) with MEI. The correlation map identifies two areas, one including Iberia, the Mediterranean, and Central and Northern Europe, where anticorrelation is clearly identifiable, and a second zone encompassing Scandinavia, Western Russia, and Baltic states, characterized by positive correlation. Figure 18. Spatial distribution of the correlation coefficients between the MEI and the Up coordinate time series. The Up time series were generated by using the first two components of the PCA. The grey dots represent those stations whose time series are not significantly correlated with the MEI. Interannual Vertical Deformations and Variations of SP and TWS The Earth's crust undergoes deformations of different nature. Recent studies have proposed methods to extract common mode components from GPS coordinates time series with the aim of identifying spatial and temporal patterns of certain signals [54,57]. Of increasing interest are signals related to climate variations/changes. Hence, it is important to identify interannual variations and examine their possible attribution. Vertical displacements induced by loading of the crust are explained for environmental parameters, such as the SP and TWS. Using a PCA approach and eight years of data (2010-2018) from a network of 114 GPS stations, we investigated the interannual variability of the vertical deformation over the European continent, including the Mediterranean area, in relation to fluctuations of the SP and TWS. Main modes of variability of the vertical component were identified through the PCA analysis, with the first two modes explaining 44.3% of the total ( Table 1). The first mode shows a homogenous spatial behavior of Europe and the Mediterranean, with larger magnitudes of the coefficients around central-northern Europe and the Balkans. The behavior of the first time component, characterized by a 5-year period oscillation with maxima in 2012 and 2018 and minimum in 2015, can be explained in terms of loading variations, likely attributable to TWS. Evidence of this process is also provided by the result of the first SVD between the GPS Up and the TWS, which identifies two different spatially coherent behaviors-namely, that of Europe and the Mediterranean area in the center-south and Scandinavia, Baltic countries, and Western Russia in the north. The second time component reveals a decadal period suggesting Up decrease till about 2015, followed by increase in the south west of Europe, while the northeast shows an opposite pattern. The second SVD between the Up and the TWS substantiates this finding. The SP loading effect on the crust is also noticeable. 
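A station-wise correlation map such as the one in Figure 18 can be produced along the following lines, assuming monthly Up residuals reconstructed from the first two PCA modes and a monthly MEI series on the same time axis. Pearson correlation is used here purely for illustration (the text does not state which estimator was applied), and its p-value provides the p > 0.05 mask for non-significant stations.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_months, n_stations = 108, 114            # ~9 years of monthly values, 114 stations
mei = rng.standard_normal(n_months)        # placeholder for the NOAA MEI series
up_monthly = rng.standard_normal((n_months, n_stations))  # reconstructed Up field

corr = np.zeros(n_stations)
pval = np.zeros(n_stations)
for j in range(n_stations):
    corr[j], pval[j] = pearsonr(up_monthly[:, j], mei)

# Stations with p > 0.05 would be plotted as grey (not significant).
significant = pval <= 0.05
print(f"{significant.sum()} of {n_stations} stations significantly correlated with MEI")
```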
The first SVD space and time components between the GPS Up and the SP clearly indicate an opposite behavior of the two fields over the entire study area. The second SVD spatial pattern confirms the opposite behavior of Southern Europe and the Mediterranean with respect to the north. The footprint of hydrological loading in GPS time series has been recognized. Another example is a study focusing on the Eastern Tibetan Plateau [58], which has shown interannual nonlinear signals in the common mode components of the GPS time series, predominantly related to hydrological loading. MEI and Vertical Deformations Several studies have underpinned the association between precipitation in the Mediterranean region and the ENSO. Shaman and Tziperman [56] have shown that interannual variability of fall and early winter (September-December) precipitation over Southwestern Europe (Iberia, Southern France, and Italy) is linked to ENSO variability in the eastern Pacific via an eastward-propagating stationary Rossby wave train. It has been documented [59] that, when El Niño is active, precipitation increases during late summer, autumn, and early winter in Western Europe and the Mediterranean region; however, during late winter and spring, the correlation is negative. The study also found spatially coherent patterns in Central and Eastern Europe, where the correlation is negative in autumn and positive during winter and spring. The outcomes of these studies corroborate our findings. In fact, the first time component of the TWS presented in Figure 7a shows precipitation increase in late summer, autumn, and early winter of 2014 and 2015. We recall that 2014 was characterized by the onset of a very strong El Niño that fully developed during 2015 and terminated in middle 2016. In Figure 17a, during the strong El Niño conditions, we observe anticorrelation between the MEI and the Up first time component during late summer and autumn of 2014 and early winter 2014-2015, while positive correlation is found during late winter and spring 2015. A similar pattern is recognizable during 2015-2016. This behavior can be explained with the loading/unloading process of the Earth's crust exerted by the increase/absence of precipitation. The outcomes of the SVD analysis of the TWS and the GPS Up, described in Section 4.2, agree with these results, which is expected since precipitation is among the main contributors to TWS. The pattern of the first time component of the vertical deformation shows a nearly 5-year oscillation, with a well-recognizable change at the beginning of 2014, when switching from a period of about 4 years of strong first and then moderate La Niña to a very strong El Niño. Figure 7b illustrates an approximately decadal fluctuation of the second time component of the vertical deformation peaking in middle March 2015, about six months before El Niño reaches its maximum strength in middle October 2015. Timescales like the ones observed in this study were identified by Cheng and Ries [60] when analyzing four decades of significant variations in the Earth's dynamical oblateness (J 2 ) derived from satellite laser ranging data. They explain a timescale of~2~6 years by the mass redistribution in the atmosphere and ocean associated with the ENSO events during the period from 1998 to 2016. The significant oscillation they find at~10.4 timescale can be described by existing models of atmosphere, ocean, and surface water changes only up to the level of~18%. 
However, they suggest that the observed decadal variation is a consequence of mass redistribution within atmosphere-ocean-hydrosphere associated with ENSO events since the observed variation is well correlated with a 5-year running mean of the ENSO index. Additionally, Chao et al. [61] investigated the variation of the Earth's oblateness J 2 on interannual-to-decadal timescales. They indicate contributions from the Antarctic Oscillation (AAO) and the AO for time scale shorter than 5 years and from the Pacific Decadal Oscillation (PDO) for timescale longer than 5 years. According to their findings, contributions from ENSO and the Atlantic Multidecadal Oscillation (AMO) are absent. For the 10.5-year signal, they suggest a non-climatic origin-namely, the solar cycle, although this apparent correlation is presently uncertain. Conclusions The time series of the vertical movements of the Earth's crust contain signals due to the evolution of geophysical and climatic processes. This study shows evidence, over Europe and the Mediterranean area, of interannual and longer period variability of GPS-derived vertical deformations and of their relationship with the spatial and temporal variability of environmental parameters, such as TWS, SP, and the MEI climate index. The GPS heights and the environmental parameters data series were analyzed using a PCA approach, further correlated by means of the SVD technique. The first two modes of variability of the height were also correlated with the MEI index. The first and second time component of the height residuals, responsible for more than 44% of the observed variance, show a 5-year and a decadal variation (9 years is the time frame of this study), respectively. Both curves exhibit superimposed shorter period variability. Over the 5-year timescale, the whole of Europe and the Mediterranean behave coherently, with Central Europe and the Balkans denoted by larger coefficients. The spatial pattern of the decadal fluctuation presents a north-south gradient. The observed height variations are explained in terms of loading variations on the Earth's crust, likely associated for the 5-year periodicity with the transition from a few years of strong and moderate La Niña to a very strong El Niño and to a sequence of severe droughts that affected the study area during 2010-2012 and again during 2015-2018. The decadal timescale can be related to the occurrence of the strong ENSO event and the associated hydroclimate anomalies that are generally characterized, in the European-Mediterranean area, by a north-south path. The retrieved pattern is compatible, in fact, with positive precipitation anomaly in Scandinavia and negative anomaly in Southern Europe related to a strong La Niña event (2010-2011), followed by a moderate event (2011-2012) weakening until the beginning of 2015 when a strong El Niño started lasting about one and one-half years. This last period was characterized by negative precipitation anomaly in Scandinavia and positive anomaly in Southern Europe. The short-period variations superimposed to both the 5-year and to the decadal period are related to specific weather and climate events. The spatial patterns found for the SP and the TWS time series are in good agreement with those of the height by showing for the first mode a coherent behavior of the study area and a north-south gradient for the second mode, which is particularly clear for the SP series. As for the TWS, coefficients of larger magnitude are present in Central Europe. 
A periodicity of about 2 years can be recognized in the first time component, while a decadal timescale shows up in the second. The SVD analysis between height and SP has clearly identified the anticorrelation between these two parameters, which is explained by the loading response of the crust to SP variations. The results also elucidate the different response, to the same SP forcing, of inland and coastal sites, with the former showing larger effects. A 5-year timescale is present in the SVD first time component. We observe a north-south gradient in the second spatial component; however, the relevant time behavior does not present any identifiable long-period feature. The coupled variability of height and TWS shows clear anticorrelation, explained by the loading mechanism. The study area is not coherent since an opposite behavior between north and south is observed. A 5-year oscillation can be recognized in the first SVD mode. The second mode of coupled variability, also showing anticorrelation, exhibits a nearly decadal variation which was not found in the SVD results of the pair height and SP. This suggests that the observed decadal variation of the height is due to the TWS variations rather than to those of SP. The comparison between the MEI index and the stations' height, represented by the first two modes of a monthly PCA analysis, shows quite a coherent pattern of anticorrelation in the large area encompassing Iberia, the Mediterranean, and central-northern Europe. Instead, more to the north, the region comprising Scandinavia, Baltic countries, and Western Russia is positively correlated. The comparison between the first and second time components and MEI sheds light on the height interannual variability due to climatic fluctuations-namely, those that may be associated with the ENSO phenomenon. The 5-year fluctuation present in the first time component is likely modulated by the sequence of a strong and a moderate La Niña, followed by the strongest El Niño of the last two decades. The large oscillations that characterize the years 2016, 2017, and 2018 are realistically due to the severe droughts that affected the study area. The decadal oscillation shaping the second height time component is well correlated with the MEI index behavior. The correlation is significant, p < 0.05, with a value of +0.58. Iberia, central-northern Europe, and the Mediterranean area experience height decrease till about the onset of the strong 2015-2016 El Niño event, followed by increase during the subsequent four years. The opposite behavior characterizes Scandinavia, the Baltic countries, and Western Russia. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
2021-05-21T16:56:50.518Z
2021-04-16T00:00:00.000
{ "year": 2021, "sha1": "c781ba2ac41ce0e7a5058052a562298213a0b192", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-4292/13/8/1554/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "b5f14524459c926b9d5224a7ced9b6e763fd41b2", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Computer Science", "Geography" ] }
96438868
pes2o/s2orc
v3-fos-license
First Observation of an X-Ray Beam Following a New Geodesic When Gravitational Waves Deform Space-Time By using X-rays of a linear accelerator (LINAC Siemens X rays, 6 MeV) for medical use, we were able to measure gravitational waves, GW, (amplitude = 56.385mm, frequency = 1/3Hz, velocity = c and polarization) and its threedimensional effect on X-ray trajectories. The collimated X-ray beam, which is in the plane (X,Y), travels on the Z axis at the speed of light in air and passing through the machine isocenter, until it reaches the target and, ultimate, is recorded in a radiographic film. Apparently, there is an exceptional coincidence in the operation of LINAC and the presence of GW. This coincidence occurred in VIRGINIA, GPS (38.634 351 1, -77.282 523 9), UTC (12/06/2011: 12: 56: 01). This important event, but not sui generis, was recorded in the LINAC computer system, on a film for radiography, in the log file of the cancer treatment center and it was reported to SIEMENS in order to try to find an explanation of a possible hardware failure, some abnormality or any software issue. The physicist and Siemens service engineer on site concluded that such event should never happened because LINAC was not malfunctioning. Consequently, for the X-rays, there was a deviation of the isocenter of the LINAC (∆X = (11.5 ± 0.5) mm, ∆Y = (48 ± 0.5) mm), by the action of the amplitude of GW. The tolerance of a LINAC is lower than these measurements, and the equipment will stop working if they are greater than ±1.0mm for isocenter (zero position) and ±2.0mm for other collimator leaf positions. Therefore, this constitutes a register of space-time alteration with a consequent variation of the path of the X-ray beam. Finally, the registered gravitational waves leave invariant the angle between the axes (X,Y), of the X-ray beam, indicating a constant polarization. Introduction The detection of gravitational waves, GW, is a collaborative achievement of this century, and it will mark the new challenges of astrophysics and future astronomy (Abbott, B. P. et al., February 2016, June 2016).Emblematic projects such as LIGO and VIRGO are examples of the new way that the scientific community is working, (Abbott, B. P. et al., 2018(Abbott, B. P. et al., , 2018(Abbott, B. P. et al., A, 2018 C) C), where some questions are resolved in a multidisciplinary way, not only in the philosophical conception or the methodology of research, but also, about the use of equipment and infrastructure that modern society and technology have at their disposal, including: nanotechnology embedded in hardware and software devices in cancer treatment equipment (Xoft, LINAC), low energy X-ray spectroscopy, X-ray telescopes (Chandra, Hitomi, Newton), nuclear magnetic resonance in Quantum Computation, catalysis and plasma in petroleum refining, among others (Abbott, B. P. et al., June 2017, October 2017), (Piasetzky, Sargsian, Frankfurt, Strikman and Watson, 2006). The precision of the LIGO experiments (Livingston, Hanford) constitutes an international reference in laser light interference in gravitational waves detection (Ciufolini, 2007), ( Álvarez-Samaniego, W. P., Álvarez-Samaniego, B. and Moya-Álvarez, 2017), (Hawking and Israel (Eds.), 1979), with the following characteristics: 1. 
Interferometric laser system with two perpendicular arms under vacuum conditions together with an optical path of 4Km in Livingston and 2Km in Hanford.It is based on detecting gravitational waves through tiny movements they produce in mirrors, which results in the generation of a diffraction pattern in the interferometer signal. 2. GW originating millions of light years from Earth distort the surfaces of mirrors in interferometers about 10 −18 m (the proton has a size of 0.843 × 10 −15 m) (Abbott, B. P. et al., February 2016, 2018, 2018 A). 3. The duplication of readings from two different observatories allows us to identify false detections produced by local effects such as small seismic disturbances or an instruments failure. 4. The construction of Advanced LIGO was completed in February 2015 and its scientific mission began in September of that year, with a sensitivity four times greater than the initial design. The fundamental fact about LIGO detection is about the space-time-matter interaction, which appears as a single entity.This last can be explained later on this paper making use of the linearization of Einstein's equations.Therefore, after establishing the LIGO the basic elements of the measurement technique, we can formulate the question below: Is it possible to study GW through the alteration of the trajectory of an X-ray beam in the deformed space-time?The answer is yes, in Figure 1.LINAC Siemens elements light of the following premises or requirements: 1. We need to use a high-energy X-ray beam, to ensure that it travels at the speed of light in a vacuum, because its crosssection depends inversely on the energy (Rowshanfarzad, Sabet, O'Connor and Greer, 2011), (Litzenberg, Gallagher, Masi, Lee, Prisciandaro, Hamstra, Ritter and Lam, 2013). 2. We must establish a beam of X-rays, having a defined shape and with a reference system attached to the beam. 3. We must measure the interaction of the gravitational wave with the X-ray beam, to determine amplitude, frequency and polarization. 4. We must know the trajectory of the X-rays before the passage of the gravitational wave, in order to establish comparisons and differences in the space-time tissue. The unique technology that meets all these requirements is already implemented around the world for the treatment of cancer and it is called LINAC (Rowshanfarzad et al., 2011), (Sontag and Steinberg, 1999).In order to measure GW, we simply take the data from a functioning LINAC, during the passage of a gravitational wave. Equipment and Materials: Linear Accelerator Figure 1.LINAC Siemens elements.In the upper left figure, we can see the appearance of the LINAC Siemens, with a strong robustness and a weight of the order of tons.The upper right figure indicates the main elements of the LINAC head where X-ray radiation is produced and considered an X-ray source, collimation and multi leaf collimator (MLC) systems.The isocenter is the reference point for mechanical and radiation field which has a maximum tolerance , which produces the shape of the X-ray beam.This last also corresponds to the shape of the target.Each MLC leave is calibrated by an independent system that has a maximum tolerance of ±2mm. LINAC. SIEMENS PRIMUS 6 MV X Rays We will specify its important parts for the development of this experiment. X-ray source It is a source of X-rays, with an energy of 6 MeV, located in the position Z = 0. 
X-rays from this point will pass through the collimators and then through the patient's tumor or target and finally being recorded in a FILM. MultiLeaf collimator (80 leafs) The X bank contains the 80 sheets of the MLC and is 19.685cm from the X-ray source.The MLC bank is located below the Y-jaws bank. Y-jaws bank consists of two thick tungsten leaves located above the MLC. The isocenter reference grid is installed at 42.578cm. Radiography X-ray film is placed after target and It is perpendicular to radiation beam passing througth LINAC's isocenter. It is an exploratory technique that consists of subjecting a body or an object to the action of X-rays to obtain an image on a photographic plate.Image or photograph is obtained by means of this exploratory technique.Minimum irradiation time is 3s in order to obtain a good resolution and according to film's response curve.12/06/2011, 12/07/2011, 12/07/2011.We will explain why the presence of GW in the Earth is not an exaggeratedly improbable phenomenon, but rather it can be detected by linear accelerators that are used to treat cancer. 1.In the LINAC Gantry, the X-ray source, the Isocenter, the primary and secondary collimators are physically located, controlled and well defined. 2. Source of X-rays, is located in the upper part of the LINAC head. 3. The isocenter is unique, defined, constructed and operated in an exact manner from the start to the end of the lifespan of the LINAC. 4. Bank of tungsten collimators X and Y, which have an autonomous control system for each leaf.The geometric figure formed in the MLC will represent the shape of the tumor to be irradiated.5. Regarding the machine mechanical Isocenter, it has an accuracy less than 1mm with a tolerance of ±1mm. 6. X-ray recording FILM.It is located after the target at a distance of 115cm from the source of X-rays and perpendicular to the X-ray beam axis.The X-rays will travel through isocenter and target and ultimate are registered on a photographic film. Measurement of Amplitude, Frequency and Polarization of the Gravitational Wave The gravitational wave is characterized by: amplitude, frequency, polarization and speed.It is a phenomenon that alters space-time and travels at the speed of light in the vacuum.When a beam of X-rays fully defined in shape and size passes through the modified space-time, it undergoes a modification in shape and size.From this gravitational wave we can measure and/or calculate amplitude, frequency and polarization. The amplitude is measured through the displacements in the trajectory of the X-rays and it is equal to ∆X = (11.5±0.5)mm and ∆Y = (48 ± 0.5)mm. The frequency of the gravitational wave is measured indirectly by the response curve of the irradiated film.This last is true since in order to obtain an adequate constrast on film a minimum of 3s of irradiation is needed, so that the frequency is equal to v = 1/3Hz.This result agrees with the theoretical studies (Hawking and Israel, 1979) that establish that frequency of GW must be in the interval [10 −7 , 10 11 ]Hz. Polarization: Plus-polarized, Cross-polarized X-ray beam in reference to the Isocenter.The irradiated tumor has an identical shape previously defined, modeled and constructed in the MLC. It is a figure whose shape is given by tungsten collimators and Y-jaws.It has a defined isocenter and fully calibrated and verified every day before starting operation in any cancer treatment center, in particular, POTOMAC-RADIATION-CENTER. 
Triple control systems
• The linear accelerator used to treat cancer is triple controlled by three independent systems to guarantee the dose given to the patient and delivered to the exact area or tumor volume. The LINAC has an accuracy of less than 1 mm and a maximum tolerance of ±1 mm.
• The irradiation process will start only when the calibration of the area to be irradiated is correct and has been checked by the LINAC systems.
• If one of the systems that control the MLC is not in perfect alignment with the reference system or isocenter, then the LINAC automatically stops its operation until the misalignment is corrected.
• The X-rays were properly recorded on the film showing a displacement in X and Y. This was not detected by any of the systems that monitor for misalignment or displacement. The only possibility for this phenomenon to occur can be understood if we consider that something moved, at the speed of light in the vacuum, and disturbed the space-time of the X-ray beam.
Results The X-rays came out of the source, crossed the collimator, crossed the tumor and arrived at the FILM, demonstrating that the isocenter and the equipment are properly calibrated and work perfectly. On the radiographic film, we can see that the isocenter is displaced in X and Y, and this displacement is recorded. We measure the displacements and obtain ∆X = (11.5 ± 0.5) mm and ∆Y = (48 ± 0.5) mm. The source, the isocenter, the collimator, and the FILM are aligned on the Z axis. The X-rays traveled at the speed of light, c, a length L = (1150 ± 10) mm in a time t = L/c ≈ 1.15/c s, while the possible gravitational wave traveled at the speed of light the distance (∆X² + ∆Y²)^(1/2). The gravitational wave must have constant polarization. Discussion of Results We have shown that there was no instrumental error when taking the film (radiography) and therefore, the disturbance in the film registration is due to the passage of a gravitational wave, which altered the space-time trajectory of the X-ray photons. Figure 5. The film variation of the isocenter of the X-ray beam. The 80 Leaves Collimator Motion The collimator system that defines the shape of the tumor has an autonomous control together with an independent system that drives each leaf. It is a mechanical device that works at speeds far lower than the speed of light (v/c = 92.5925 × 10⁻⁹) and it could never be relocated to define a new isocenter in less than 1/3 × 10⁻⁹ ns, which was the duration of the phenomenon. The Isocenter Motion The isocenter is built and verified during the installation of the linear accelerator LINAC and it has three control subsystems that are checked before carrying out an X-ray irradiation. When the LINAC is out of calibration or defective, these control subsystems stop the LINAC operation and it simply does not irradiate. During the dates of the gravitational phenomenon (12/06/2011, 12/07/2011, 12/07/2011), no anomaly was reported. However, due to the strange information recorded on film, the correct functioning was checked and the company SIEMENS was contacted to evaluate some type of abnormality in the equipment. No abnormality was detected. All the analysis work was duly recorded by the oncologist and the chief physicist. Impossibility of a new isocenter or double isocenter.
Theoretical Implications The gravitational waves interacting with our planet Earth, do so at non-relativistic speeds, allowing the coupling of matter with space-time, creating a new fully coupled system that we call space-time-matter.This space-time-matter system obeys a coupled system of partial differential equations called Linearized Einstein's field equations (see ( Álvarez-Samaniego, W. P. et al., 2017)).This system can be obtained from the Einstein field equations, as an approximation of weak fields and for speeds much lower than the speed of light in vacuum (see ( Álvarez-Samaniego, W. P. et al., 2017)).It is also shown in ( Álvarez-Samaniego, W. P. et al., 2017) that there is an additional term for the space-time-mass density that corresponds to the curvature of space-time.According to the LIGO experiment, the gravitational waves originating on reaching the Earth distort the surfaces of the mirrors in the interferometers by 10 −18 m.This phenomenon has not been explained yet, nor it is understood how this interaction with the mirrors takes place.However, the physical explanation and the mathematical proof, given in ( Álvarez-Samaniego, W. P. et al., 2017), shows the existence of a space-time-matter coupling, given by the following system x, y, z, t) is the space-time-mass current density and ρ g = ρ g (m, x, y, z, t) is the space-time-mass density.The gravitoelectromagnetic field system (1.1)-(1.4) is equivalent to the Maxwell equations in a suitable approximation, thus showing a good analogy between the classical electromagnetic theory and Einstein's gravitational theory.Through this similarity, it is possible to establish a model for the quantization of gravity. Conclusions 1.The detection of GW is a very common experiment of daily life.It may affect cancer treatments and any device that uses ionizing radiation.Especially, it may alter particles that travel at speeds close to the light in a vacuum. 2. The only way to measure the amplitude of a gravitational wave is that simultaneously such a wave affects a conglomerate of photons (X-rays) in three dimensions, that is, in the polarization axes and in the propagation direction of the gravitational wave. 3. The GW deform the space-time and a sufficient period of time (3s) is needed for X-rays to pass through this deformation and undergo changes in the measurements of time and/or space.It became necessary to analyze a whole beam of X-rays, which form a closed surface of dimensions recorded in Figures 3, 4 and 5. From experiment characteristics and, mainly due to a space fixed mechanical isocenter, we were able to record on a film that indeed isocenter could largely move under the action of a gravitational wave. 4. Future experiments may measure other properties of GW.For example, geostationary satellites dedicated to the monitoring of GW.They will measure every second the travel time of laser light between geostationary satellites and fixed points of the earth.This could be possible for a minimum number of satellites, in such a way that the terrestrial surface is covered and, space-time variations can be inferred by the passage of GW. 5. Experiments with high-energy X-rays are convenient, due to their very small interaction cross-section which is in the range of femtometers.This guarantees that they travel at the speed of light in the vacuum and they can interact with GW in a direct way.The physical variables can be fully measurable, due to technological advances on cancer treatment devices, using x-rays. 
6.The detection of GW is now a very common experiment of daily life, and it may affect cancer treatments and any device that uses ionizing radiation.Especially, it may disturb particles that travel at speeds close to the light in the vacuum. 7. The only way to measure the amplitude of a gravitational wave is when it simultaneously perturbs a conglomerate of photons (X-rays) in the three dimensional space.That is to say, in the polarization axes and along the direction of propagation of the gravitational wave. 8. In agreement with the scientific method, several possible causes of this space-time disturbance were analyzed, discarding a possible tectonic phenomena and some atypical astronomical event different from GW.It is sufficient to review the following pages to support the last assertion.Moreover, the equipment, used in this experiment that was installed for cancer treatment around the world (LINAC Siemens), does not work when there are tectonic, volcanic or electromagnetic phenomena that may alter the measurements and the treatment dose.Hence, it is not possible to think that any cause of this nature might have affected this experiment.indicate total normality between the radiation field of the LINAC and the Isocenter, where no displacement in X nor in Y is observed.This was verified, minute by minute throughout of the working day in the laboratory, without observing any abnormality in the primary control system and, having the values of 8:56, NORMAL; 8:57, NORMAL; 8:58, NORMAL; 10:46, NORMAL; 10:54, NORMAL. where G is the Cavendish gravitational constant, ρ is the mass density and → J is the current mass density. • On the other hand, the Einstein field equations are given by where for all i, k ∈ {0, 1, 2, 3}, R ik are the contravariant components of the Ricci tensor, g ik are the contravariant components of the metric tensor, T ik are the contravariant components of the energy-momentum tensor, R is the scalar curvature and c is the speed of light in vacuum.Using (2) and the approximation for weak non-relativistic fields, we obtain the following system of equations (see ( Álvarez-Samaniego, W. P. et al., 2017) for a complete proof): g ∂t is the space-time-mass density and → J g is the space-time-mass current density.We can notice that in the approximation of the non-relativistic weak field, the density ρ g is written as a multiple of the classical mass density ρ plus a term corresponding to the curvature of space-time, proportional to ∂E 0 g ∂t , which constitutes a relativistic correction to the Newtonian classical system, given by (1).Using the last system of equations (3), and considering empty space and weak gravitational fields, it is possible to obtain (see ( Álvarez-Samaniego, W. P. et al., 2017)) the following hyperbolic equations for the fields − → E g and → B g : (4) Figure 2 Figure 2. 3-D Radiation Beam Path through MLC or OPTIFOCUS MLC EQUIPPED Digital Linear Accelerator Figure 2 . Figure 2. 
3-D Radiation Beam Path through MLC or OPTIFOCUS MLC EQUIPPED Digital Linear Accelerator. The interaction or perturbation measured by the action of GW occurs in the section between the MLC collimator located at Z = 19.685 cm and the isocenter grid located at Z = 42.578 cm. Of course, the gravitational wave also affects the trajectory from the MLC to the FILM and uniformly throughout the beam path. The figure in the center indicates the collimator, the isocenter and the X-ray beam without the presence of GW, while the figure on the right takes into account the presence of GW, where it is possible to see how the displacement of the isocenter occurs.
Figure 3. First control system for a correct functioning of the LINAC. Hours and records are indicated in the LINAC report and dated 12/06/2011 in Washington, D.C. The figures correspond to records of the gravitational abnormality, which indicate total normality between the radiation field of the LINAC and the Isocenter, where no displacement in X nor in Y is observed. This was verified, minute by minute, throughout the working day in the laboratory, without observing any abnormality in the primary control system and having the values of 8:56, NORMAL; 8:57, NORMAL; 8:58, NORMAL; 10:46, NORMAL; 10:54, NORMAL.
Figure 4. Second control system for a correct functioning of the LINAC. This second redundant control system indicates that the LINAC was working perfectly prior to the irradiation of the patients, explicitly on 12/06/2011 at (08:56:00) Washington, D.C. time. After this second verification the LINAC starts the X-ray irradiation to the patient.
Figure 5. We can observe in the film the variation of the isocenter of the X-ray beam, with respect to the X and Y coordinates. The upper figures indicate the isocenter placement before and after the passage of the gravitational wave. The lower figures indicate the displacement of the isocenter during the passage of the gravitational wave.
Appendix B. Non-relativistic approximation of a weak gravitational field
2019-04-04T16:20:33.969Z
2019-03-01T00:00:00.000
{ "year": 2019, "sha1": "9117b9fde1cce14b2f22819526a090280d20de0c", "oa_license": "CCBY", "oa_url": "https://www.ccsenet.org/journal/index.php/jmr/article/download/0/0/38678/39386", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9117b9fde1cce14b2f22819526a090280d20de0c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
258936951
pes2o/s2orc
v3-fos-license
Transcriptome Profiling of Prostate Cancer, Considering Risk Groups and the TMPRSS2-ERG Molecular Subtype Molecular heterogeneity in prostate cancer (PCa) is one of the key reasons underlying the differing likelihoods of recurrence after surgical treatment in individual patients of the same clinical category. In this study, we performed RNA-Seq profiling of 58 localized PCa and 43 locally advanced PCa tissue samples obtained as a result of radical prostatectomy on a cohort of Russian patients. Based on bioinformatics analysis, we examined features of the transcriptome profiles within the high-risk group, including within the most commonly represented molecular subtype, TMPRSS2-ERG. The most significantly affected biological processes in the samples were also identified, so that they may be further studied in the search for new potential therapeutic targets for the categories of PCa under consideration. The highest predictive potential was found with the EEF1A1P5, RPLP0P6, ZNF483, CIBAR1, HECTD2, OGN, and CLIC4 genes. We also reviewed the main transcriptome changes in the groups at intermediate risk of PCa—Gleason Score 7 (groups 2 and 3 according to the ISUP classification)—on the basis of which the LPL, MYC, and TWIST1 genes were identified as promising additional prognostic markers, the statistical significance of which was confirmed using qPCR validation. Introduction Prostate cancer (PCa) is one of the most common types of cancer among men worldwide [1]. The majority of PCa cases are diagnosed as having a localized form, which represents the early malignant process confined to the prostate gland without spreading beyond its borders. In the early stages of localized PCa (LPCa), symptoms may include minor changes in the urinary system or even no symptoms at all. The standard clinical diagnosis of LPCa includes various methods such as prostate palpation, measurement of levels of prostate-specific antigen (PSA), ultrasound/magnetic resonance imaging, and biopsy. LPCa often has a favorable prognosis, and there is a wide range of available therapeutic approaches (active surveillance, radiation therapy, radical prostatectomy, or their combinations) that can be chosen based on the patient's risk stratification for biochemical recurrence. One of the main systems for classifying risk groups in PCa is the D'Amico classification system, which defines three risk groups: low, intermediate, and high. This risk group assessment is based on three factors: the Gleason score (GS), PSA level, and stage of the disease. The low-risk group is characterized by a GS ≤ 6, PSA level < 10 ng/mL, and T1-T2a stage; intermediate risk by GS = 7, PSA level 10-20 ng/mL, and/or T2b-T2c stage; and high risk by GS ≥ 8, PSA level > 20 ng/mL, and/or stage ≥ T3a [2]. Patient stratification into risk groups is a widely used approach for assessing the prognosis of PCa. However, in some cases, this method may not be sufficiently informative, potentially leading to incorrect conclusions and inappropriate treatments. One of the limitations of patient stratification into such risk groups is that this method does not consider many other factors that may affect the disease prognosis, such as age, comorbidities, and genetics. Studies have shown that even patients with low D'Amico risk may have a poor prognosis if they have other risk factors that were not considered during stratification [3]. 
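As a concrete illustration, the D'Amico thresholds quoted above translate into a small classification helper such as the sketch below. The Gleason score, PSA, and stage cut-offs follow the text; the ordinal encoding of the clinical T stage is our own assumption, introduced only to make the comparison executable.

```python
def damico_risk(gleason_score: int, psa_ng_ml: float, t_stage: str) -> str:
    """Assign a D'Amico risk group from Gleason score, PSA level and clinical T stage."""
    # Ordinal encoding of the T stages mentioned in the text (illustrative assumption).
    order = ["T1", "T2a", "T2b", "T2c", "T3a", "T3b", "T4"]
    stage_rank = order.index(t_stage)

    # High risk: GS >= 8, or PSA > 20 ng/mL, or stage >= T3a.
    if gleason_score >= 8 or psa_ng_ml > 20 or stage_rank >= order.index("T3a"):
        return "high"
    # Intermediate risk: GS = 7, or PSA 10-20 ng/mL, or stage T2b-T2c.
    if gleason_score == 7 or 10 <= psa_ng_ml <= 20 or stage_rank >= order.index("T2b"):
        return "intermediate"
    # Low risk: GS <= 6, PSA < 10 ng/mL, T1-T2a.
    return "low"

print(damico_risk(7, 8.5, "T2a"))   # -> "intermediate"
print(damico_risk(6, 35.0, "T1"))   # -> "high" (driven by the PSA level alone)
```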
Thus, stratification of patients into risk groups is a useful tool for assessing PCa prognosis, but other risk factors and individual patient characteristics must also be considered. One of the main factors complicating the diagnosis and treatment of PCa, as well as of other types of cancer, is molecular heterogeneity. This phenomenon is defined by differences in the molecular properties of tumor cells, such as changes in gene expression, mutations, and other factors that affect cell function and behavior [4,5]. Furthermore, the development of aggressive forms of PCa requires only a few driver alterations [6][7][8]. Molecular heterogeneity is also present within LPCa, and this can lead to different prognosis and outcomes [4,5]. It has been shown that 74% of all PCa cases can be attributed to one of a range of molecular subtypes that have been identified on the basis of analysis of somatic mutations, changes in copy number, gene expression, gene fusions, and DNA methylation [9]. There are currently seven major molecular subtypes of PCa that have been identified as part of the Prostate Adenocarcinoma Project of The Cancer Genome Atlas (TCGA) consortium [9]. Four of the seven subtypes are characterized by the presence of fusion transcripts between the TMPRSS2 gene exons and the exons of the ETS family of genes (the erythroblast transformation-specific family of transcription factors): ERG, ETV1, ETV4, and FLI1 (the frequency of occurrence of these subtypes is 46%, 8%, 4%, and 1%, respectively). Three other subtypes are characterized by the presence of point mutations in one of the following genes: SPOP, FOXA1, or IDH1 (the frequency of occurrence of these subtypes is 11%, 3%, and 1%, respectively) [9,10]. Thus, about half of all cases of prostate cancer have a fused TMPRSS2-ERG transcript, which is formed due to an intrachromosomal rearrangement leading to the fusion of two genes: TMPRSS2 and ERG. Considering the molecular heterogeneity of PCa in particular, the selection of the group of tumors characterized by the TMPRSS2-ERG fusion transcript allows researchers to focus on specific molecular subtypes of PCa and, therefore, form a more homogeneous group of PCa cases. To improve prognoses and to determine the best treatment approach for each patient with LPCa, a more detailed study of tumor molecular heterogeneity is necessary. New biomarkers and genomic technologies can help in this direction and allow for the identification of the biological nature of any given PCa and its prognosis [11]. This study involved RNA-Seq profiling of 58 LPCa samples from intermediate-and high-risk groups and 43 locally advanced PCa (LAPCa) samples from a cohort of Russian patients, with the aim of identifying transcriptional profile differences between these groups, taking into account the TMPRSS2-ERG subtype, and searching for promising genes as prognostic markers. Differentially Expressed Genes among Risk Groups within Localized PCa LPCa can be classified into one of three risk groups: low, intermediate, and high. Based on the LPCa cases we analyzed, the low-risk group is quite rare (about 2%), whereas the intermediate one is the most frequent (80-82%), and the high-risk group accounts for 16-18%. Initially, we compared primary PCa tumors belonging to the intermediate-(n = 47) and high-risk (n = 10) groups. 
Based on RNA-Seq data obtained for PCas in Russian patients having tumors of the high and intermediate LPCa risk groups, just six (↓COL17A1, ↑FAM83D, ↑GCSAML, ↓IER2, ↑IFI44L, ↓MYH3) differentially expressed (DE) genes were identified (p-value ≤ 0.05 according to the QLF and MW tests, Log2CPM ≥ 3, |Log2FC| ≥ 1, Supplementary Table S1-1). This is definitely too small a number of DE genes; apparently, these risk groups do not have extensive transcriptomic variations among themselves. Differentially Expressed Genes between LAPCa and LPCa for High-Risk Group At this stage of our work, besides the LPCa samples, we included LAPCa cases in the study, in order to find transcriptome differences between these grades of tumor extension. Including the expression data of LPCa cases, as well as the data obtained in our previous work devoted to LAPCa samples, we performed differential expression analysis (DEA) between the groups of LPCa (n = 10) and LAPCa (n = 43) samples classified as high-risk. As a result, 243 DE genes were identified (p-values ≤ 0.05 according to the QLF and MW tests, Log2CPM ≥ 3, |Log2FC| ≥ 1, Supplementary Table S1-2). Figure 1 shows the expression profiles of the top 50 DE genes. Based on the identified DE gene profile, we further analyzed the enrichment of biological pathways associated with LAPCa versus LPCa within the high-risk group. As a result of the analysis based on the GSEA algorithm and the KEGG Human 2021 database, we found statistically significant changes in 68 biological processes (FDR ≤ 0.05, Figure 2, Supplementary Table S2-1). Based on the identified DE gene profile, we further analyzed the enrichment of biological pathways associated with LAPCa versus LPCa within the high-risk group. As a result of the analysis based on the GSEA algorithm and the KEGG Human 2021 database, we found statistically significant changes in 68 biological processes (FDR ≤ 0.05, Figure The identified biological pathways include many processes with known involvement in the development and progression of various types of tumors, including "Non-small cell lung cancer", "Thyroid cancer", and "Melanoma", as well as the effects of "Proteoglycans in cancer" and the "Ras signaling pathway". A complete list of analysis results is presented in Supplementary Table S2-1. Differentially Expressed Genes between LAPCa and LPCa within the TMPRSS2-ERG Molecular Subtype It is known that the frequency of occurrence of the TMPRSS2-ERG subtype varies from 40% to 50% in PCa [9,10]. In addition, most researchers consider TMPRSS2-ERG as a factor involved in increased aggressiveness, propensity for invasion, and metastasis [12][13][14]. However, several studies have demonstrated that TMPRSS2-ERG is either a precursor of a good prognosis or has no association with progression and prognosis at all [15,16]. In the present study, the incidence of the TMPRSS2-ERG subtype was about 42% for LAPCa The identified biological pathways include many processes with known involvement in the development and progression of various types of tumors, including "Non-small cell lung cancer", "Thyroid cancer", and "Melanoma", as well as the effects of "Proteoglycans in cancer" and the "Ras signaling pathway". A complete list of analysis results is presented in Supplementary Table S2-1. Differentially Expressed Genes between LAPCa and LPCa within the TMPRSS2-ERG Molecular Subtype It is known that the frequency of occurrence of the TMPRSS2-ERG subtype varies from 40% to 50% in PCa [9,10]. 
In addition, most researchers consider TMPRSS2-ERG as a factor involved in increased aggressiveness, propensity for invasion, and metastasis [12][13][14]. However, several studies have demonstrated that TMPRSS2-ERG is either a precursor of a good prognosis or has no association with progression and prognosis at all [15,16]. In the present study, the incidence of the TMPRSS2-ERG subtype was about 42% for LAPCa and 28% for LPCa. Furthermore, we searched for associations of the TMPRSS2-ERG subtype with clinical and pathomorphological criteria. Spearman's correlation analysis did not show a significant association of the TMPRSS2-ERG subtype with any of the following criteria: age, tumor extension groups (LAPCa, LPCa), risk groups, Gleason Score, ISUP, pT, or preoperative PSA value. Insomuch as the TMPRSS2-ERG subtype results in a more homogeneous group, we performed DEA between the groups of LPCa (n = 16) and LAPCa (n = 18) samples only within the TMPRSS2-ERG subtype. As a result, 207 DE genes were identified (p-values ≤ 0.05 according to the QLF and MW tests, Log2CPM ≥ 3, |Log2FC| ≥ 1, Supplementary Table S1-3). Figure 3 shows the expression profiles of the top 50 of these DE genes. and 28% for LPCa. Furthermore, we searched for associations of the TMPRSS2-ERG subtype with clinical and pathomorphological criteria. Spearman's correlation analysis did not show a significant association of the TMPRSS2-ERG subtype with any of the following criteria: age, tumor extension groups (LAPCa, LPCa), risk groups, Gleason Score, ISUP, pT, or preoperative PSA value. Insomuch as the TMPRSS2-ERG subtype results in a more homogeneous group, we performed DEA between the groups of LPCa (n = 16) and LAPCa (n = 18) samples only within the TMPRSS2-ERG subtype. As a result, 207 DE genes were identified (p-values ≤ 0.05 according to the QLF and MW tests, Log2CPM ≥ 3, |Log2FC| ≥ 1, Supplementary Table S1-3). Figure 3 shows the expression profiles of the top 50 of these DE genes. Based on the identified DE genes, we also performed an enrichment analysis of a number of biological pathways. As a result of the analysis, we found statistically significant changes in 16 biological processes (FDR ≤ 0.05, Figure 4, Supplementary Table S2-2). After clarifying the TMPRSS2-ERG molecular subtype in the LAPCa category, various signaling processes that have a known involvement in PCa progression, such as the "TGF-beta signaling pathway", were seen to become the most significant. A complete list of analysis results is presented in Supplementary Table S2-2. We also identified DE genes whose expression was statistically significantly associated with the LAPCa group within the TMPRSS2-ERG subtype: BHLHA15, CIBAR1, CLIC4, CORO1B, CRB3, DNAJB4, DNM3OS, EEF1A1P5, HECTD2, ID4, MFSD3, MIR222HG, OGN, RPLP0P6, SH3BGRL, and ZNF483. The results of the differential expression of these genes are presented in Table 1. Based on the identified DE genes, we also performed an enrichment analysis of a number of biological pathways. As a result of the analysis, we found statistically significant changes in 16 biological processes (FDR ≤ 0.05, Figure 4, Supplementary Table S2-2). After clarifying the TMPRSS2-ERG molecular subtype in the LAPCa category, various signaling processes that have a known involvement in PCa progression, such as the "TGFbeta signaling pathway", were seen to become the most significant. A complete list of analysis results is presented in Supplementary Table S2-2. 
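The selection criteria applied throughout this work (p ≤ 0.05 in both the QLF and MW tests, Log2CPM ≥ 3, |Log2FC| ≥ 1) can be expressed as a simple filter over a per-gene results table, as sketched below. The QLF p-values are assumed to come from the upstream differential-expression tool; the column names, the pandas layout, and the way Log2FC and mean Log2CPM are computed here are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical per-gene inputs: QLF p-values plus log2 CPM expression matrices
# for the two groups being compared (e.g., LAPCa vs. LPCa within TMPRSS2-ERG).
rng = np.random.default_rng(4)
genes = [f"GENE{i}" for i in range(200)]
lapca = pd.DataFrame(rng.normal(5, 1, (200, 18)), index=genes)   # 18 LAPCa samples
lpca = pd.DataFrame(rng.normal(5, 1, (200, 16)), index=genes)    # 16 LPCa samples
qlf_pvalues = pd.Series(rng.uniform(0, 1, 200), index=genes)

records = []
for g in genes:
    mw_p = mannwhitneyu(lapca.loc[g], lpca.loc[g]).pvalue
    log2fc = lapca.loc[g].mean() - lpca.loc[g].mean()   # difference of mean log2 CPM
    log2cpm = np.concatenate([lapca.loc[g], lpca.loc[g]]).mean()
    records.append((g, qlf_pvalues[g], mw_p, log2fc, log2cpm))

res = pd.DataFrame(records, columns=["gene", "qlf_p", "mw_p", "log2fc", "log2cpm"])
de_genes = res[(res.qlf_p <= 0.05) & (res.mw_p <= 0.05) &
               (res.log2cpm >= 3) & (res.log2fc.abs() >= 1)]
print(len(de_genes), "genes pass all four thresholds")
```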
Insomuch as the TMPRSS2-ERG subtype results in a more homogeneous group, we performed DEA between the groups of LPCa (n = 16) and LAPCa (n = 18) samples only within the TMPRSS2-ERG subtype. As a result, 207 DE genes were identified (p-values ≤ 0.05 according to the QLF and MW tests, Log2CPM ≥ 3, |Log2FC| ≥ 1, Supplementary Table S1-3). Figure 3 shows the expression profiles of the top 50 of these DE genes. Based on the identified DE genes, we also performed an enrichment analysis of a number of biological pathways. As a result of the analysis, we found statistically significant changes in 16 biological processes (FDR ≤ 0.05, Figure 4, Supplementary Table S2-2). After clarifying the TMPRSS2-ERG molecular subtype in the LAPCa category, various signaling processes that have a known involvement in PCa progression, such as the "TGF-beta signaling pathway", were seen to become the most significant. A complete list of analysis results is presented in Supplementary Table S2-2.

We also identified DE genes whose expression was statistically significantly associated with the LAPCa group within the TMPRSS2-ERG subtype: BHLHA15, CIBAR1, CLIC4, CORO1B, CRB3, DNAJB4, DNM3OS, EEF1A1P5, HECTD2, ID4, MFSD3, MIR222HG, OGN, RPLP0P6, SH3BGRL, and ZNF483. The results of the differential expression of these genes are presented in Table 1. It should be noted that, according to the differential expression of the CIBAR1, CLIC4, DNAJB4, and EEF1A1P5 genes within the TMPRSS2-ERG molecular subtype, we observed the highest correlation of these with the LAPCa group. We also considered the predictive potential of selected genes by analyzing ROC curves based on a logistic regression algorithm. The results of the analysis are presented in Table 2. According to the results obtained, the EEF1A1P5 and RPLP0P6 genes had the highest values (AUC > 0.9) of the AUC metric in the test data, both when analyzed in the total set of samples and within the TMPRSS2-ERG molecular subtype. It is also worth noting that an increase of the AUC metric to more than 0.9 after clarification of the molecular subtype was found for the CIBAR1, CLIC4, HECTD2, OGN, and ZNF483 genes.
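The ROC analysis above was run with a logistic regression workflow in scikit-learn; the following R sketch illustrates the same idea (logistic regression on a single gene's expression, then AUC) with toy data, and is not the authors' pipeline.

```r
# Logistic regression + ROC/AUC for one hypothetical gene.
library(pROC)
set.seed(2)
group <- rep(c(0, 1), each = 20)             # 0 = LPCa, 1 = LAPCa
expr  <- rnorm(40, mean = group * 1.5)       # hypothetical expression values

fit  <- glm(group ~ expr, family = binomial) # logistic regression
prob <- predict(fit, type = "response")      # predicted probabilities
roc_obj <- roc(response = group, predictor = prob)
auc(roc_obj)                                 # area under the ROC curve
```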
Differentially Expressed Genes Associated with ISUP 3 at Intermediate Risk for Localized PCa

Based on the RNA-Seq data obtained for ISUP = 3 (n = 17) and ISUP = 2 (n = 10) LPCa in our Russian patient cohort within GS = 7, 36 differentially expressed (DE) genes were identified (p-values ≤ 0.05 according to the QLF and MW tests, Log2CPM ≥ 3, |Log2FC| ≥ 1, Supplementary Table S1-4). Figure 5 shows the expression profiles of these DE genes. Based on the results of our analysis of the enrichment of biological pathways associated with the ISUP 3 group in the LPCa of the intermediate-risk group, we found statistically significant changes in only one cancer-associated biological process, the "PPAR signaling pathway" (NES = 1.93; FDR = 0.02, Figure 6). A complete list of analysis results is presented in Supplementary Table S2-3.

Validation of the Relative Expression of Genes Associated with the ISUP 3 Group in the Intermediate-Risk LPCa Group of Russian Patients

We also carried out selection and validation of the relative expression of promising genes associated with the ISUP 3 group in our cohort of patients, in order that they might later be considered as additional prognostic markers. Based on the most significant differential expression results, the LPL, MYC, and TWIST1 genes were selected for validation by qPCR. According to the results of this validation of the relative expressions of the genes under consideration, statistically significant results were confirmed (Figure 7).

Discussion

In the current work, we performed a comprehensive study of LPCa, considering the risk groups, the degree of differentiation of the tumor cells (ISUP classification), and inclusion in the TMPRSS2-ERG molecular subtype based on RNA-Seq profiling. As our results demonstrate, the PCa cases belonging to the high- and intermediate-risk groups show only limited transcriptomic variation between themselves. We found changes in only six genes (↓COL17A1, ↑FAM83D, ↑GCSAML, ↓IER2, ↑IFI44L, ↓MYH3), but little is known about their involvement in PCa. However, when considering two grades of tumor extension (LPCa and LAPCa), we were able to find significant differences in gene expression. It is necessary to mention that when considering the TMPRSS2-ERG molecular subtype (i.e., when only TMPRSS2-ERG-positive cases are included in the analysis), the significance of the changes in the expression of the previously detected genes (CIBAR1, CLIC4, EEF1A1P5, OGN, RPLP0P6, and ZNF483) increased. CIBAR1, CLIC4, OGN, and ZNF483 are protein-coding genes, but nothing is known about their association with PCa and cancer in general. EEF1A1P5 and RPLP0P6 are pseudogenes, and there are currently no data on their association with PCa progression. However, there is evidence that EEF1A1P5 gene transcripts and the RPLP0P6 protein are present in exosomes of various tumor cell lines [17,18]. Regarding the HECTD2 gene in PCa, it has been shown that a decrease in the expression of this gene significantly affects androgen-induced and AR-mediated transcription, while suppression of HECTD2 also enhances the growth of LNCaP cells [19].
According to our data, we also observed a decrease in the expression of the HECTD2 gene in the more advanced stage of the high-risk group, LAPCa. When considering the most significantly enhanced cancer-associated biological processes in the LAPCa category within the TMPRSS2-ERG molecular subtype, such signaling pathways as the "cAMP signaling pathway" and the "TGF-beta signaling pathway" come to the fore. Aberrant signaling in these pathways has been implicated in various types of tumors. Transforming growth factor β (TGF-β) is a key regulator of many biological processes, including metastasis and invasion. This protein binds to its receptors and activates a signaling pathway that is involved in the regulation of cell proliferation and differentiation. Multiple studies have shown that such TGF-β signaling is associated with poor prognosis in PCa [20][21][22]. cAMP signaling can play both a tumor-suppressing and a tumor-promoting role, depending on the type of tumor. This cascade also regulates the growth, migration, invasion, and metabolism of cancer cells, including those in PCa [23,24]. Thus, our results indicate that the identified signaling pathways play an important role in PCa and can potentially be used to assess likely prognoses. Further research is needed to better understand the mechanisms of action of these signaling pathways at different stages of PCa progression and their potential value in developing more effective treatments. Our study also considered another clinical problem, one based on the molecular heterogeneity of prostate tumors. According to the D'Amico classification for LPCa, patients with PSA from 10 to 20 ng/mL, a Gleason score (GS) of 7 (ISUP 2/3), and cT2b belong to the intermediate-risk group. The standard of care for patients in this group is radical prostatectomy with/without extended pelvic lymphadenectomy or external beam radiation therapy. This group of patients is one of the most heterogeneous, including both patients with GS (3+4) and GS (4+3), as well as a variety of PSA levels. The main difficulty in the treatment of patients in this risk group is the high probability of disease progression after radical treatment. The recommendations of the American Association of Urology (2017) propose a division of this intermediate-risk group into favorable (GS 3+4, risk group 2 (ISUP 2)) and unfavorable (GS 4+3, risk group 3 (ISUP 3)). Unfortunately, this division does not allow a qualitative change in the approach to the treatment of patients in this category, and the main approach remains radical prostatectomy or radiation therapy with androgen deprivation therapy. We investigated the features of the transcriptome profile in our LPCa patients in the intermediate-risk group, based on the results of differential gene expression between the ISUP 2 and 3 groups. From our analysis of biological pathway enrichment, it was found that the ISUP 3 group was characterized by a statistically significant increase in the regulation of the PPAR signaling pathway (NES = 1.93; FDR = 0.015). This signaling cascade is one of the most important mechanisms for regulating lipid and glucose metabolism, as well as cell growth and differentiation. The main participants in this pathway, according to our sample of patients, are the genes ADIPOQ (LogFC = 6.92; MW p-value = 0.04), FABP4 (LogFC = 4.72; MW p-value = 0.03), and LPL (LogFC = 3.45; MW p-value = 0.01).
Based on the identified profiles of the DE genes, we selected the most significant of them in terms of statistical and expression metrics for subsequent qPCR validation: LPL, MYC, and TWIST1. Statistically significant differences between the ISUP 2 and ISUP 3 groups were shown, based on the relative expression of all the selected genes (p-value ≤ 0.05). The TWIST1 and MYC genes are well-known and important regulators of cancer-associated processes and remain objects of active research in the field of oncology. Various studies have shown a relationship between high levels of expression of these genes and the aggressiveness of oncological diseases, including PCa [25][26][27][28]. Based on our results with Russian patients, we have also demonstrated that increased expression of the TWIST1 and MYC genes is associated with an unfavorable intermediate risk in LPCa. However, the LPL gene, which encodes the enzyme lipoprotein lipase and plays a key role in the metabolism of fats and carbohydrates, is of particular interest [29]. LPL is an important enzyme involved in extracellular lipolysis and can potentially be supplied by tumor cells or adjacent adipose tissue cells into the tumor microenvironment [30]. The main function of this enzyme is to hydrolyze triglycerides, resulting in the release of fatty acids, essential building blocks of biological membranes; dysregulation of fatty acid metabolism is therefore a vital component of lipid metabolism reprogramming in cancer. Tumor cells can use circulating free fatty acids as an energy source through lipolysis, for membrane biosynthesis or in signaling processes [31]. There is also experimental evidence that lipogenesis in tumors, associated with increased expression of the fatty acid synthase gene (FASN), is strongly dependent on the activity and/or expression of important oncogenes and tumor suppressors, including MYC, which cooperates with the sterol regulatory element-binding proteins (SREBPs) and can induce in vitro and in vivo lipogenesis, thus playing an important role in initiating and maintaining oncogenic growth [32].

Materials

The PCa samples were obtained from Russian patients who had undergone surgical intervention in the P.A. Hertzen Moscow Oncology Research Center (a branch of the National Medical Research Radiological Center, Ministry of Health of the Russian Federation) between 2015 and 2020. All materials were collected and characterized by the organization's Pathology Department according to the WHO Classification of Tumours of the Urinary System and Male Genital Organs [33]. Each sample contained a minimum of 70% tumor cells. Following surgical resection, the tissue samples were immediately frozen and stored in liquid nitrogen. In the current study, we used 58 LPCa (adenocarcinoma) samples obtained from patients who underwent surgical treatment but had not received neoadjuvant therapy (Table 3). Additionally, RNA-Seq data of 43 lymph node-negative LAPCa samples obtained in our previous study were included [34]. The samples have the following characteristics: no regional metastasis (N0 category); negative resection margins; any PSA value; and any Gleason score. Samples with the presence of regional metastasis (pN1) were not included in the study as they are characterized by a specific transcriptomic expression pattern.

Table 3. Clinical and pathological characteristics of the studied cohort.
pT, primary tumor estimation; N, regional lymph nodes; M, distant metastases; ISUP, The International Society of Urological Pathology.

Samples of frozen tumor tissues were first homogenized using a MagNA Lyser device (Roche, Basel, Switzerland). Subsequent total RNA isolation was performed using the MagNA Pure Compact RNA Kit (Roche) on the MagNA Pure Compact System (Roche) according to the manufacturer's protocol. The concentration of isolated total RNA was assessed on a Qubit 4.0 fluorimeter (Thermo Fisher Scientific, Waltham, MA, USA) using the Qubit RNA BR Assay Kit (Thermo Fisher Scientific). The RIN (RNA integrity number) parameter, which characterizes the integrity of RNA, was evaluated using an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA). The RIN for all samples studied was no less than 7. Sample preparation of mRNA libraries was performed using the TruSeq Stranded mRNA Kit (Illumina, San Diego, CA, USA) as described previously [35]. The size of the resulting mRNA library was ~260 bp. High-throughput sequencing of mRNA libraries was performed on a NextSeq 500 System (Illumina) using the NextSeq 500/550 High Output Kit v2.5 [39], as described previously [34]. The results were considered significant at p-values of the quasi-likelihood F-test (QLF) and the Mann-Whitney U-test (MW) ≤ 0.05. Gene set enrichment analysis (GSEA) was performed using the GSEApy package in Jupyter Notebook, Python (v.3.6) [40]. Annotation of the results was obtained based on the KEGG Human 2021 database. The results were considered significant at FDR ≤ 0.05. ROC analysis was performed on the basis of the Logistic Regression algorithm, using the scikit-learn library in Jupyter Notebook, Python (v.3.6).

Reverse Transcription and Quantitative PCR (qPCR)

cDNA samples were obtained from the mRNA template using Mint reverse transcriptase and oligo(dT) primer (20 µM) according to the manufacturer's protocol (Evrogen, Moscow, Russia). qPCR was performed in three technical replicates in a total reaction volume of 10 µL on an Applied Biosystems 7500 instrument (Thermo Fisher Scientific). The TaqMan Gene Expression Assay Hs03063375_ft (Thermo Fisher Scientific) was used to determine the presence of the TMPRSS2-ERG fusion transcript. ROX was used as a reference dye. The PUM1 gene was used as a reference for analysis of relative mRNA expression. The sequences of primers used to validate markers based on mRNA expression are shown in Table 4. The following process was used for amplification: 95 °C for 15 min; 40 cycles at 95 °C for 15 s; and 60 °C for 60 s. To assess the level of expression, the method of relative measurements (∆CT) was used and calculations were performed using the ATG program (Analysis of Transcription of Genes) [41]. Visualization and statistical analysis of expression results were performed using the MW test in the R environment (v.3.6.3, Vienna, Austria).
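A minimal sketch of the ∆CT approach described above is given below; it is not the authors' ATG program, and the Ct values are hypothetical. It computes expression of a target gene relative to the PUM1 reference and compares groups with a Mann-Whitney (Wilcoxon rank-sum) test.

```r
# Relative expression via the 2^-dCT method, compared between ISUP groups.
ct_target <- c(24.1, 23.8, 25.0, 22.9, 23.5, 24.7)   # target gene (e.g., LPL)
ct_pum1   <- c(20.2, 20.0, 20.5, 19.9, 20.1, 20.4)   # PUM1 reference gene
group     <- factor(c("ISUP2", "ISUP2", "ISUP2", "ISUP3", "ISUP3", "ISUP3"))

delta_ct <- ct_target - ct_pum1   # dCT per sample
rel_expr <- 2^(-delta_ct)         # relative expression

wilcox.test(rel_expr ~ group)     # MW test between ISUP 2 and ISUP 3
```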
Conclusions

We performed RNA-Seq profiling of 58 LPCa and 43 LAPCa tissue samples, obtained as a result of radical prostatectomy on a cohort of Russian patients. Bioinformatics analysis revealed that in the high-risk group, LAPCa showed enrichment of certain biological pathways, both across the entire sample and within the TMPRSS2-ERG molecular subtype. Such enrichment could be further investigated in the search for new potential therapeutic targets in the studied categories of PCa. We also determined which genes showed the greatest significant differences in expression, allowing the LPCa and LAPCa categories in the high-risk group to be distinguished when taking into account the TMPRSS2-ERG molecular subtype. The highest predictive potential was found for the CIBAR1, CLIC4, EEF1A1P5, OGN, RPLP0P6, and ZNF483 genes. The study also examined the transcriptomic features of the intermediate-risk group of LPCa within GS = 7 (ISUP classification, groups 2 and 3). A statistically significant enrichment of the "PPAR signaling pathway" in the ISUP 3 group was shown. Based on the identified transcriptomic profile, the LPL, MYC, and TWIST1 genes were selected as promising additional prognostic markers, the statistical significance of which was confirmed by qPCR validation.

Institutional Review Board Statement: Ethical review and approval were waived, as the current research was a retrospective study using medical records/biological specimens obtained in previous clinical practices.

Informed Consent Statement: Patient consent was waived due to the study being retrospective and its use of pseudonymized data.

Data Availability Statement: All data generated or analyzed during this study are available (GSE229904). The dataset also includes expression data from our previous studies devoted to the analysis of primary PCa tumors.
2023-05-28T15:04:34.723Z
2023-05-25T00:00:00.000
{ "year": 2023, "sha1": "44cbf929ce647e68e83782c7f33a8a0931791121", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/ijms24119282", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b4905ef40ba201df7802c8800af39567e4431c29", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
55734452
pes2o/s2orc
v3-fos-license
Standardization and Evaluation of Probiotic Shrikhand

In India, among the different indigenous fermented milk products, "Shrikhand" assumes special importance for its sensory attributes. The present investigation was envisioned to standardize and evaluate probiotic shrikhand using single (Lactobacillus acidophilus) and mixed probiotic strains (Lactobacillus acidophilus and Lactobacillus rhamnosus). The chemical constituents such as moisture, total solid content, protein, fat, ash, pH, acidity, and reducing and total sugars of the probiotic shrikhand were analyzed. The viable probiotic count of shrikhand was recorded as 25×10¹² cfu g⁻¹ for L. acidophilus and 30×10¹² cfu g⁻¹ for the mixed probiotic strains in fresh shrikhand. During storage, the viability of the probiotics decreased due to the increase in acidity; i.e., L. acidophilus and the mixed probiotic strains had a viable probiotic count of 20×10¹² cfu g⁻¹. The moisture and pH were noticed to decrease during storage, whereas acidity and total solid content increased on storage. The probiotic shrikhand had good storage stability during 30 days of storage at refrigeration temperature (4°C). Probiotic shrikhand could thus serve as a good carrier of probiotics to improve gut health and is also suitable for lactose-intolerant individuals.

Introduction

Probiotics are viable microbial dietary supplements that, when introduced in sufficient quantities, positively influence the health mainly by improving the composition of intestinal microbiota. The World Health Organization (2001) defined probiotics as "live microorganisms which, when administered in adequate amounts, confer a health benefit on the host". Probiotics can survive better in dairy products than in nondairy foods. Most of the probiotics can readily utilize lactose as an energy source for growth. Milk is one of the most important natural products consumed by people all around the world in one form or another. During fermentation the microorganisms convert lactose into lactic acid and its metabolites, which confers improved digestibility and nutrition. The technology of application of probiotic organisms in fermented dairy products, in general, aims to combine the nutritional value of milk and the health benefits of the bacteria with their ability to grow in milk, resulting in a nutritionally healthy and desirable product for the consumers. For the commercial production of probiotic enriched fermented milk products, it is important to select suitable dairy starter cultures.
These starter cultures are carefully selected microorganisms, which are deliberately added to milk to initiate and carry out the desired fermentation under controlled conditions in the production of fermented milk products. Most of them belong to the lactic acid bacteria (Lactococcus, Lactobacillus, Streptococcus and Leuconostoc). Starter cultures can be used as single strains, mixed strains or multiple strains, depending upon the type of product to be prepared. The probiotic strain in use should be resistant to stomach acidity and to pancreatic and bile secretions. Fermented milk products are generally sour milk products prepared by fermenting milk by means of specific dairy starter cultures. The important fermented milks in India are dahi, shrikhand, lassi, butter milk, yoghurt, etc. It is estimated that approximately eight per cent of the milk produced in the country is converted into fermented milk (Aneja et al., 2002). Consumption of fermented milk provides a lot of health benefits beyond basic nutrition. Fermented milk has become increasingly popular in recent times on account of being an important source of probiotics in our diet. Shrikhand is one of the fermented milk delicacies and is derived from the Sanskrit word "Shrikharini", meaning a curd preparation with added sugar, flavouring material, fruits and nuts. Shrikhand is a semisolid, sweetish sour, wholesome and indigenous fermented milk product popular in Maharashtra and Karnataka (Desai and Gupta, 1986). Shrikhand, obtained from curd (dahi), contains most of the valuable constituents of milk such as protein, fat, minerals, fat-soluble vitamins and an appreciable amount of B-complex vitamins, particularly riboflavin and folic acid. On account of this nutritional importance, the present study was designed to develop the technology for probiotic shrikhand by optimizing the appropriate starter culture, level of inoculum and incubation period. Under this background, the present study was carried out with the following objectives: to standardise probiotic shrikhand using selected probiotics; to evaluate the quality characteristics of the probiotic shrikhand; and to study the shelf life of probiotic shrikhand.

Microbial cultures and other ingredients

The starter culture Lactobacillus acidophilus (NCDC 14) was obtained from the National Collection of Dairy Cultures (NCDC), National Dairy Research Institute (NDRI), Karnal. The mixed probiotic strains, Lactobacillus acidophilus and Lactobacillus rhamnosus, were obtained from Darolac, Mumbai. Milk and other ingredients were purchased from the local market of Madurai.

Preparation of starter culture

The freeze-dried LAB cultures were revived in MRS broth by incubating at 40±2°C for 24-48 h. The revived cultures were re-inoculated in MRS broth and incubated for 12-16 h at 40±1°C. Then 1 ml of each culture was transferred into 10 ml of sterile 12 per cent Reconstituted Skim Milk (RSM) and incubated for 12-16 h at 40±1°C.

Preparation of yogurt and chakka

Toned milk (3.0% fat, 8.5% SNF) was boiled at 95°C for 7 min and cooled to a temperature of around 40°C. After cooling, the probiotic cultures, Lactobacillus acidophilus / Lactobacillus acidophilus + Lactobacillus rhamnosus, were added to the milk @ 2 per cent inoculum and incubated at 40°C for 15 hours. After incubation, the yogurt was drained in muslin cloth for 13 hours at refrigeration temperature to get chakka.

Preparation of plain shrikhand

Powdered sugar (40%) and powdered cardamom (1.2%) were added to the chakka and kneaded well using an electric blender.
Storage studies

The probiotic shrikhand was packed in polystyrene cups and stored under refrigeration conditions (4°C). The packed probiotic-enriched shrikhand samples conforming to the different treatments (Ts and Tm) were studied for storage stability during a storage period of 30 days at refrigeration temperature (4°C). The chemical, microbial and sensory characteristics of the stored samples were analysed using the standard procedures of AOAC at regular intervals of 10 days during the period of storage (30 days).

Viability of probiotic bacteria in shrikhand

Viability of probiotic bacteria was determined during storage by the serial dilution method at 10-day intervals. One gram of each probiotic shrikhand sample was weighed and serially diluted up to 10¹⁴ dilutions. The probiotic bacterial population was enumerated by the pour plate technique using MRS media after incubation at 40°C for 48 h. The results were expressed as log cfu ml⁻¹.

Microbial quality of shrikhand

The microbial quality of shrikhand during storage was enumerated by the serial dilution method as described by Istavankiss (1984). Yeast and mold count and coliform count for probiotic shrikhand were assessed on 0, 10, 20 and 30 days of storage. Dilutions of 10⁻³ and 10⁻⁴ were used for yeast and mold count and 10⁻¹ and 10⁻² were used for coliforms. Samples were then pour plated using yeast extract malt extract agar and violet red bile agar, respectively, and incubated at room temperature for 3-5 days for yeast and mold and 1-2 days for coliforms.

Organoleptic evaluation

Organoleptic evaluation of the samples was done by 20 semi-trained judges at regular intervals of 10 days during the 30 days of storage study, using a nine-point hedonic rating scale to grade the probiotic shrikhand, with scores ranging from like extremely (9.0) to dislike extremely (1.0) (Amerine et al., 1965).

Statistical Analysis

The data obtained were subjected to statistical analysis to determine the impact of treatments, storage periods and their interaction on the quality of probiotic shrikhand. Factorial Completely Randomized Design (FCRD) was applied for the statistical analysis (Rangaswamy, 1995).
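As a rough illustration of how plate counts from the pour plate technique described above translate into the cfu g⁻¹ and log cfu values reported in this study, the following R sketch uses hypothetical numbers (colonies counted × dilution factor ÷ volume plated).

```r
# Hypothetical plate-count to cfu conversion for a serially diluted sample.
colonies        <- 25      # colonies counted on one plate
dilution_factor <- 1e12    # e.g., the 10^12 dilution
volume_plated   <- 1       # ml of diluted sample plated

cfu_per_g <- colonies * dilution_factor / volume_plated
cfu_per_g           # 2.5e13, i.e., 25 x 10^12 cfu g-1 in this toy example
log10(cfu_per_g)    # log cfu, the unit used for reporting viability
```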
Moisture and total solid content

The moisture content of the shrikhand developed with the mixed probiotic strains (Tm) was 43.33 per cent, compared to 45.12 per cent with the single culture L. acidophilus (Ts). A decrease in moisture content was observed in both samples, i.e. Ts (43.27%) and Tm (41.60%), during storage. In congruence with the decrease in moisture content shown in Table 1, there was a corresponding increase in total solid content in all the shrikhand samples. The total solid content of the shrikhand samples carrying the single culture (Ts) was 54.88 per cent, and that of the mixed probiotic strains enriched shrikhand (Tm) was 56.62 per cent. The increase in total solid content during storage may be attributed to the decrease in moisture content on storage. From the results, it was revealed that the mixed culture (L. acidophilus and L. rhamnosus) enriched shrikhand had a higher total solid content than the single culture (L. acidophilus) enriched shrikhand. In probiotic shrikhand, a significant difference was noticed for changes in moisture content among the different treatments, periods of storage and their interaction (Tables 2 and 3).

Protein and fat

The protein and fat content of the shrikhand developed with the mixed probiotic strains (Tm) were 6.80 and 10.45 g/100g respectively, whereas with the single culture L. acidophilus (Ts) they were 6.75 and 6.80 g/100g respectively. While analyzing the changes in the protein content of the shrikhand samples conforming to the different treatments during storage, a slight increase in protein content was noticed in all the shrikhand samples. A slight reduction in fat content was noticed in all the shrikhand samples (Table 1). Statistical analysis recorded a significant difference in protein content among the different treatments and also across the storage period.

Ash

The ash content of the shrikhand samples (Ts and Tm) was 0.60 per cent. No significant change was noticed in the ash content of the shrikhand samples during the 30 days of storage. Kumar et al., (2011) reported that the mean ash percentage values of shrikhand ranged from 0.25 to 0.59 per cent. There was no change in ash content during the entire storage period under refrigeration temperature. Nigam et al., (2009) also reported that the storage period did not affect the ash content of the prepared shrikhand. Similar results were obtained in the present study.

pH and acidity

The pH and acidity values of the probiotic shrikhand samples (Tm) were 4.07 and 1.50 per cent, compared to 4.00 and 1.44 per cent in the Ts samples. A significant decrease in pH was noticed during storage in all the samples, which may be attributed to the increase in acidity during storage (Table 1). From the results it was concluded that the mixed culture (L. acidophilus and L. rhamnosus) produced more acidity when compared to the single culture (L. acidophilus). Acidity increased in all experimental samples during storage because, during the post-acidification period, the activity of the probiotic cultures was not completely stopped, and they may have continued to produce lactic acid as long as nutrients were available in the yoghurt. Similar results were reported by Manjula et al., (2012).

Reducing sugars and total sugars

The reducing and total sugar content of the samples was 2.20 and 45.64 g/100g respectively in Tm, whereas in Ts the corresponding values were 2.30 and 48.50 g/100g respectively. From Table 1, it can be inferred that there was an increasing trend in reducing sugar in all probiotic shrikhand samples. The increase in reducing sugar levels during storage of the shrikhand samples was attributed to the breakdown of total sugar into simple sugars. During storage, the total sugar was found to decrease, which might be due to the breakdown of carbohydrates (Raghuwanshi et al., 2011).

(Table: yeast and mould counts at dilutions 10¹ and 10²; Ts = shrikhand enriched with single culture (L. acidophilus); Tm = shrikhand enriched with mixed culture (L. acidophilus and L. rhamnosus). Fig. 1: Viability of probiotics during storage.)

Raghuwanshi et al., (2011) reported that fresh shrikhand contained on an average 2.96 per cent reducing sugar, which increased to 3.68 per cent after 5 days of storage irrespective of the storage temperature. A significant increase in reducing sugar content was observed with increase in storage period, which is in line with the results of the present study.

Sensory attributes

The sensory attributes such as colour, flavour, texture, taste and overall acceptability were acceptable in both the samples, but the single culture enriched probiotic shrikhand recorded a higher sensory score (9.00) when compared to the mixed probiotic strains enriched shrikhand, which had a lower score (8.75).
During storage, the acceptability level decreased slightly and the score values were in the range of 8.00 to 8.25 (Table 4); both products were found to be acceptable during the entire storage period (30 days).

Microbial quality

Viability

The viability count of the L. acidophilus enriched shrikhand was 25×10¹² cfu g⁻¹ and 21×10¹³ cfu g⁻¹, whereas in the mixed probiotic strains enriched shrikhand the viability count was 30×10¹² cfu g⁻¹ and 27×10¹³ cfu g⁻¹ (Table 5). During storage, the viability of the probiotics decreased (Fig. 1) due to the increase in acidity. Nevertheless, it was observed that good viability of the probiotic count persisted during storage, which may be due to the pH (3.80 to 4) remaining suitable for the growth of probiotics. Tungrugsasut et al., (2012) reported that the initial count of probiotics in probiotic yoghurt at 3 per cent inoculum was 126×10⁸ cfu/ml and that at 4 per cent was 129×10⁸ cfu/ml. The counts of probiotics in the two formulae decreased gradually during storage. After 30 days of storage, the counts of probiotics in the 3 per cent and 4 per cent probiotic yogurt were 61×10⁸ cfu ml⁻¹ and 64×10⁸ cfu ml⁻¹ respectively.

Yeast and mould count

The yeast and mould growth in probiotic enriched shrikhand was analysed at regular intervals, viz., the 10th, 20th and 30th days of storage. Initially there was no yeast and mould growth (Table 3) in probiotic enriched shrikhand at dilution factors 10¹ to 10⁴. During the storage of probiotic enriched shrikhand also, there was no growth in the samples (Ts and Tm). Lakshmi et al., (2013) also observed no yeast and mold count up to the 7th day of storage of shrikhand, after which a gradual increase of yeast and mold was observed, up to counts of 9 cfu/g on the 25th day of storage. Coliform count was found to be nil throughout the storage period. From the study, it could be inferred that the probiotic shrikhand prepared using L. acidophilus was more acceptable than the shrikhand prepared using mixed probiotic strains (L. acidophilus and L. rhamnosus) during storage. The reason may be that the mixed probiotic strains produced more acidity than the single strain. Storage studies revealed that the probiotic shrikhand had good storage stability during the period of study (30 days) at refrigeration temperature (4°C). The probiotic shrikhand could thus serve as a good carrier of probiotics to improve gut health and may also be recommended for lactose-intolerant individuals.
2019-03-17T13:08:44.029Z
2017-11-20T00:00:00.000
{ "year": 2017, "sha1": "ca87eadbfc329595129d9800fc3a3d1f94843a94", "oa_license": null, "oa_url": "https://www.ijcmas.com/6-11-2017/R.%20Sivasankari,%20et%20al.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "3bf56a1cd36d2d34b22def03416fd01999098945", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
253482880
pes2o/s2orc
v3-fos-license
DNA methylation-based measures of biological aging and cognitive decline over 16-years: preliminary longitudinal findings in midlife

DNA methylation-based (DNAm) measures of biological aging associate with increased risk of morbidity and mortality, but their links with cognitive decline are less established. This study examined changes over a 16-year interval in epigenetic clocks (the traditional and principal components [PC]-based Horvath, Hannum, PhenoAge, GrimAge) and pace of aging measures (Dunedin PoAm, Dunedin PACE) in 48 midlife adults enrolled in the longitudinal arm of the Adult Health and Behavior project (56% Female, baseline age M = 44.7 years), selected for discrepant cognitive trajectories. Cognitive Decliners (N = 24) were selected based on declines in a composite score derived from neuropsychological tests and matched with participants who did not show any decline, Maintainers (N = 24). Multilevel models with repeated DNAm measures within person tested the main effects of time, group, and group by time interactions. DNAm measures significantly increased over time, generally consistent with the elapsed time between study visits. There were also group differences: overall, Cognitive Decliners had an older PC-GrimAge and faster pace of aging (Dunedin PoAm, Dunedin PACE) than Cognitive Maintainers. There were no significant group by time interactions, suggesting accelerated epigenetic aging in Decliners remained constant over time. Older PC-GrimAge and faster pace of aging may be particularly sensitive to cognitive decline in midlife. This preliminary study examined overall levels and changes in traditional and PC-based first- and second-generation epigenetic clocks and pace of aging measures in participants selected from a larger prospective cohort to represent extremes of maintained and declining cognitive function (termed Maintainers and Decliners, respectively) between a baseline visit when participants were in midlife and a second visit approximately 16 years later. We hypothesized that overall, cognitive Decliners would be biologically older compared to cognitive Maintainers. We also explored whether cognitive Decliners would show faster biological aging (i.e., steeper increases in DNAm over time) compared to cognitive Maintainers; and whether particular cognitive domains associated more strongly than others with measures of biological aging. We expected that PC-based clocks of enhanced reliability would outperform traditional clocks and that second-generation clocks and pace of aging measures trained to predict morbidity, mortality, and multi-system decline would outperform first-generation clocks optimized for age prediction. Notably, we tested several DNAm measures because a comparative analysis approach is recommended to simultaneously evaluate the utility of many DNAm measures and determine which ones are associated with aging outcomes of interest [17].

RESULTS

Neuropsychological tests were administered and biological age was estimated at both time 1 (T1) and time 2 (T2) for 24 people who declined in cognitive function (Decliners) and 24 who maintained cognitive function (Maintainers) from T1 to T2 (mean years between assessments = 15.9, range: 15.4 to 16.9), selected using an extreme groups approach (see Methods). Table 1 summarizes study participant characteristics.
Decliners and Maintainers did not significantly differ on chronological age, sex, education, race, body mass index, smoking status, or T1 cognition (a composite score derived from neuropsychological tests for spatial reasoning, working memory, processing speed, executive function, and attention; see Methods). Decliners' cognitive composite decreased from T1 to T2 (T1 M = 67.61; T2 M = 53.89, p < 0.001) whereas Maintainers' cognitive composite did not change over time (T1 M = 66.48; T2 M = 67.56, p = .189). The observed cognitive decline was more than a standard deviation decline, a clinically noticeable change in cognitive performance associated with risk for future cognitive impairments. Normative values on several neuropsychological tests were further examined to contextualize changes in the cognitive composite. As the sample performed above average at T1, the Decliners' change can be interpreted as moving from above average to average, whereas the Maintainers remained slightly above average at both time points (see Supplementary Results). All individuals in the Decliner and Maintainer groups denied being diagnosed with dementia. Adjudications were not performed, so clinical determinations regarding mild cognitive impairment (MCI) cannot be made. Correlations among the DNAmAA measures were smaller within each time point (Table 2), with the exception of Dunedin PoAm-AA and Dunedin PACE-AA, which were more strongly correlated with GrimAgeAA (r = .69-.77) and PC-GrimAgeAA (r = .68-.76), as well as with PhenoAgeAA (r = .46-.59) and PC-PhenoAgeAA (r = .37-.57) at T1 and T2.

Time and group main and interacting effects on DNAm

The traditional and PC-based epigenetic clocks and pace of aging measures significantly increased over time, generally consistent with or underestimating the time elapsed between study visits (Table 3 and Supplementary Table 1). With respect to group, overall, Decliners had an older PC-GrimAge and a faster pace of aging (Dunedin PoAm, Dunedin PACE) than Maintainers, and there were no significant group by time interactions.

Exploring specific cognitive components on DNAm

To further explore whether the several components of cognitive functioning associated differentially with PC-GrimAge and pace of aging measures, we conducted secondary analyses using the same adjusted multilevel model predicting T1 and T2 DNAm, but instead of the categorical Group predictor, we tested the continuous scaled version of each cognitive component at T2 to determine which cognition component(s) were significantly associated with DNAm-based measures of biological aging. We focused on T2 cognitive components because this was the time point that differentiated the two groups (see Supplementary Table 3). Results are depicted in

DISCUSSION

This is the first report to explore changes over time in several of the latest DNAm biological aging measures (including traditional and PC-based epigenetic clocks and pace of aging measures) in an age-, race-, sex-, education-, cognition-, and body mass index-matched case-control comparison and where cases were selected for having cognitive performance declines on objective neuropsychological tests. There were no group differences in DNAm slopes over time, which may be due to low statistical power, but is in line with the few previous studies that have examined only first- and second-generation epigenetic clocks [6][7][8][9]. However, cognitive decline was related to an overall older PC-GrimAge and a faster pace of aging (Dunedin PoAm and Dunedin PACE) compared to those without cognitive decline over this 16-year time frame. These group differences remained statistically significant when corrected for multiple comparisons at a false discovery rate of .10.
There was no evidence of associations between the first-generation epigenetic clocks and cognitive decline. Rather, our findings point to the second-generation clock PC-GrimAge as being more sensitive to cognitive change, which aligns with others who report associations between GrimAge, but not Horvath or Hannum, and worse cognitive performance cross-sectionally [19], worse future cognitive performance [8], and cognitive decline from adolescence to age 45 [3] and from age 70 to 79 [20]. Notably, we did not observe associations between (PC-)PhenoAge and cognitive decline, which may be due to limited power, but is also consistent with other reports [3,8]. Although PhenoAge and GrimAge are both second-generation clocks, they differ in how they were trained: PhenoAge was created by identifying CpGs that predict a composite measure of mortality-related blood biomarkers (see Supplementary Materials for biomarker list) and chronological age [14]. Conversely, GrimAge was created by generating DNAm surrogates of morbidity- and mortality-related plasma proteins (see Supplementary Materials) and smoking pack-years; then time-to-death was regressed onto these DNAm surrogates, chronological age, and sex to identify the CpGs [12]. The blood-based biomarkers across both epigenetic clocks reflect the functioning of similar physiological systems (e.g., immune, kidney, metabolic), but GrimAge also explicitly includes the effects of smoking, which is an established risk factor for cognitive decline and dementia [21]. In addition, of the first- and second-generation clocks, GrimAge and PC-GrimAge tend to have the highest reliability due to their two-step DNAm calculation [3,15]; thus, this measurement property may also explain why GrimAge tends to outperform other clocks, including PhenoAge. However, these reasons remain speculative and future studies with DNAm data should continue to evaluate and report associations across multiple DNAm measures (including the newest pace of aging measures, below) to facilitate comparison across studies, reconcile inconsistencies, and facilitate their inclusion in future meta-analyses and systematic reviews. In addition to PC-GrimAge, faster pace of aging was associated with cognitive decline. This report is the first to replicate Belsky and colleagues' [2,3] findings of Dunedin PoAm and Dunedin PACE associating with cognitive decline. Our findings suggest that pace of aging measures, which were developed from Dunedin Study participants aged 26-45, can inform cognitive outcomes in middle-aged and older adults. Pace of aging measures may be particularly sensitive to preclinical cognitive changes because they are indexed by a longitudinal panel of biomarkers across multiple physiological systems, which may more closely reflect the mechanisms of cognitive decline, relative to first-generation epigenetic clocks that are optimized for age prediction. Interestingly, the epigenetic clocks that pace of aging was most strongly correlated with at T1 and T2 were GrimAge and PC-GrimAge (Figures 1, 2), suggesting that these DNAm measures may be detecting some shared biological aging signals. A limitation to the current DNAm measures is a lack of mechanistic understanding of their underlying biology. Current work is underway to deconstruct these DNAm composite measures into distinct "modules" that may reflect functionally related biological changes [22].
Each epigenetic clock is comprised of differing proportions of CpGs from a given module; however, in line with our findings, GrimAge and DunedinPoAm share a similar composition of modules and have higher quantities of modules that are stronger predictors of morbidity and mortality, as compared to PhenoAge, Horvath, and Hannum [22]. Continued efforts to examine the underlying mechanisms of DNAm measures will aid our understanding of why certain clocks outperform others in predicting health outcomes, including cognitive health. All DNAm measures significantly increased over time; however, these estimates of biological aging did not increase between T1 and T2 more steeply in Decliners, compared to Maintainers, as evidenced by the absence of a significant group by time interaction. In other words, DNAm estimates of biological aging were associated with the 16-year change in cognitive functioning, but did not progress more rapidly in Decliners than among Maintainers, which may suggest that Decliners' accelerated profile of epigenetic aging was established prior to the initial assessment. However, we note that we had limited power to detect small and moderate effects (particularly interaction effects); therefore, we cannot confidently infer whether the non-significant group by time interactions are due to truly null effects and/or due to the smaller sample size. In exploring whether particular cognitive domains may covary with PC-GrimAge and pace of aging measures more strongly than others, executive function showed the most consistent associations, as well as withstanding correction for multiple comparisons. One previous report links older epigenetic age estimated from other clocks, including Horvath's intrinsic and Hannum-derived extrinsic epigenetic age acceleration and PhenoAge, but not GrimAge, to poorer executive function in African Americans with HIV and a control group [23]; others report null associations between GrimAge and executive function composites [24,25], and between Dunedin PACE and one test of executive function, Trails B [26]. Therefore, converging evidence for associations between DNAm and specific cognitive domains remains inconclusive. Future studies will benefit from investigating separate cognitive domains (in addition to general composites, which is more commonly done), to shed light on which components of cognition may be more or less affected. The current study focused on neuropsychologically-assessed cognitive decline, which can indicate future risk for dementia [27]. Indeed, in other studies, DNAm measures predicted MCI and clinical diagnosis of Alzheimer's Disease (e.g., [26,28]). No participants in our sample reported having a dementia diagnosis, but adjudications were not performed, so MCI status could not be assessed. However, descriptively, the group with cognitive performance decrements over time experienced greater than a standard deviation change in their average composite score, an indication they may be at future cognitive risk, with their T2 assessments falling slightly below normative values on several neuropsychological tests (see Supplemental Results). It remains unclear whether these individuals will manifest future cognitive impairments, but this magnitude of decline is considered clinically meaningful [29].
Strengths of this study include the longitudinal design with a relatively long follow-up of 16 years; the comprehensive assessment of cognition across several domains known to decline with age; and the recommended analysis of multiple DNAm measures [17] that allowed for comparisons across traditional and PC-based epigenetic clocks and pace of aging measures. However, this preliminary study had limited power to detect small and moderate effects (particularly interaction effects), although we maximized our ability to detect effects by selecting cognitive groups from the tails or extremes of the distribution of cognitive change. In addition, the cognition composite approach used to identify Cognitive Decliners vs. Maintainers assumed that the neuropsychological tests have the same meaning and factor structure across the 16-year time frame in both groups; our smaller, multi-group sample does not meet sample size recommendations for testing measurement invariance [30,31]. However, using a latent variable approach and testing measurement invariance is an important future direction for cognitive change research, and may yield stronger effects than a composite approach (e.g., [32]). Other limitations include only two time points for longitudinal analysis; limited generalizability in terms of education and race; and DNAm measured in blood but not the brain, although blood-brain global DNAm profiles are highly correlated (r = .86) [33]. In conclusion, these preliminary results suggest PC-GrimAge and DNAm-based pace of aging measures (Dunedin PoAm and PACE) associate with 16-year, neuropsychologically-validated cognitive decline in midlife. The results warrant a larger-scale study to better examine longitudinal associations between changes in DNAm measures and changes across multiple cognitive domains. Ultimately, establishing DNAm measures as biomarkers of cognitive function in midlife may offer pre-clinical markers of a molecular aging mechanism that can help identify individuals at increased risk for cognitive impairment and dementia in later life.

Participants

Participants were selected from a longitudinal arm of the Adult Health and Behavior (AHAB)-1 study, which comprises a registry of behavioral and biological measurements for the study of midlife individual differences [34]. AHAB-1 participants were first recruited at 30-54 years of age via mass-mail solicitation from southwestern Pennsylvania and were relatively healthy. Study exclusions at the time of initial recruitment (time 1) were a reported history of atherosclerotic cardiovascular disease, chronic kidney or liver disease, cancer treatment in the preceding year, and major neurological disorders, schizophrenia, or other psychotic illness. Other exclusions included pregnancy and reported use of insulin, glucocorticoid, antiarrhythmic, psychotropic, or prescription weight-loss medications. Baseline (T1) assessments occurred between 2001 and 2005 and follow-up (T2) assessments began in 2017 and are ongoing, with additional subjects being added at the time of writing.

Selection of participant groups

Using an extreme groups approach, a subset of AHAB-1 participants was selected for the current study: 24 Cognitive Decliners (i.e., those who showed the most decline in cognition from T1 to T2 based on changes in a cognitive composite score, described below) and 24 matched Cognitive Maintainers (i.e., those who maintained cognitive composite levels from T1 to T2, matched to Decliners on demographics and health).
The selection was carried out in the following steps: First, from the 300 available AHAB-1 participants with both T1 and T2 data who were enrolled for follow-up (T2) evaluation between June, 2017 and March, 2020, we excluded those who reported medical conditions having potential cognitive sequelae, as might be associated with Alzheimer's disease, stroke, transient ischemic attack, multiple sclerosis, Parkinson's disease, epilepsy, brain cancer, or brain cyst, and people who endorsed having a head injury, concussion, or spinal cord injury. We also excluded people with diagnosed diabetes or HbA1c greater than or equal to 7%; individuals who reported exposure in the previous 12 months to any of the neurocognitive tests administered here; individuals missing more than 3 of 10 cognitive measurements used in the present analyses; or those for whom we lack a stored T1 blood sample sufficient for DNA extraction and DNAm profiling. These exclusions resulted in 167 remaining participants. From the 167, we selected the 24 most extreme cognitive decliners, identified using the cognitive composite (described below). Next, we identified the 50 most extreme cognitive maintainers, and from those 50, matched on sex, race, T1 age, T1 education, T1 cognitive composite, and T1 body mass index to obtain the matched 24 cognitive maintainers. One-to-one multivariate matching based on Mahalanobis distance was performed using the Match function in R (Matching package) [35]. Matching was performed without replacement and by randomly breaking ties. Groups (Decliners, Maintainers) were identified blind and prior to assessment of DNAm measures.
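A minimal sketch of one-to-one Mahalanobis-distance matching with the Match function (Matching package), as described above, is shown below; the data frame, its variables, and the covariate subset are hypothetical placeholders, not the AHAB-1 data.

```r
# One-to-one matching of Decliners to candidate Maintainers on continuous
# covariates, using Mahalanobis distance (Weight = 2), no replacement,
# and random tie-breaking (ties = FALSE).
library(Matching)
set.seed(3)
dat <- data.frame(
  decliner = rep(c(1, 0), times = c(24, 50)),   # 1 = Decliner, 0 = candidate Maintainer
  age      = rnorm(74, 45, 6),
  educ     = rnorm(74, 16, 2),
  bmi      = rnorm(74, 27, 4),
  cog_t1   = rnorm(74, 66, 5)
)

m <- Match(Tr = dat$decliner,
           X  = dat[, c("age", "educ", "bmi", "cog_t1")],
           M  = 1,
           Weight  = 2,
           replace = FALSE,
           ties    = FALSE)

m$index.control   # row indices of the matched Maintainers
```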
Procedure

Sociodemographic, cognitive, psychosocial, and instrumented biological measurements were collected over multiple study visits at both T1 and T2. At T1, the neuropsychological tests used in the present analyses were administered at visit 1 and blood was drawn at visit 2. On average, there were 30.85 days between visits 1 and 2 for the sample analyzed (median = 25.5, range: 2 to 98). At T2, the neuropsychological tests used in the present analyses were administered at visits 2 and 3 and blood was drawn at visit 2. On average, there were 26.1 days between visits 2 and 3 for the sample analyzed (median = 16.5, range: 8 to 102). AHAB was approved by the University of Pittsburgh Institutional Review Board, and all participants provided written informed consent.

Demographic and health characteristics

Self-reported sex, race, years of education, and smoking status were assessed. Measures of height and weight were obtained to determine body mass index (in kg/m²).

Cognition

T1 and T2 neuropsychological tests used in the present analyses capture several domains of cognitive function: spatial reasoning, working memory, visuomotor processing speed, executive function, and attention. A cognition composite was used (described below).

Spatial reasoning

The Matrix Reasoning subtest from the Wechsler Abbreviated Scale of Intelligence [36,37] was used to assess spatial perception and reasoning. This test involves viewing an incomplete matrix and selecting the response option that completes the matrix. Higher scores correspond to better spatial reasoning.

Working memory

Working memory was assessed with the Digit Span subtest from the Wechsler Adult Intelligence Scale-III (WAIS-III) [37]. The participant is read sequences of numbers and is asked to recall the numbers in the same order (forward) or in reverse order (backward). Higher scores indicate better working memory.

Visuomotor processing speed

Participants completed the first parts of the Trail Making Test [38] and the Stroop Color-Word Test [39] to assess processing speed. Part A (in seconds) of the Trail Making Test requires participants to draw a line connecting circles numbered from 1 to 25 as quickly as possible. Higher scores correspond to poorer processing speed. The first two parts of the Stroop Color-Word Test require participants to (A) read aloud a list of color names (i.e., red, green, blue) printed in black ink and (B) name the colors of the inks (i.e., "XXXX" written in blue ink) as quickly as possible. Scores are the number of correct responses within a 45-second period, with higher scores indicating better performance.

Executive function

Participants were administered two tests of executive functioning: task switching on Part B of the Trail Making Test [38] and the interference score of the Stroop Color-Word Test [39]. The Trail Making Test Part B requires subjects to draw a line connecting numbered and lettered circles as quickly as possible, alternating between numbers and letters in ascending numerical and alphabetical order (e.g., 1-A-2-B-3-C…, etc.). To derive a measure of executive function relatively independent of psychomotor speed, time to completion of Part B is subtracted from Part A, such that higher scores indicate better performance. Assessing ability to resist cognitive interference, the Stroop Color-Word Test requires subjects to read aloud as quickly as possible from 3 pages of color word lists: pages 1 and 2 provide tests of processing speed, previously described. On Page 3 individuals are asked to report the color of the ink used to print the name of incongruent colors (e.g., "blue" for blue ink used to spell the color name "red"), thus requiring participants to inhibit a prepotent response (color word naming). Scores are the number of correct responses within a 45-second period, with higher scores indicating better performance.

Attention

Digit Vigilance pages 1 and 2 [40] was administered to assess vigilant visual tracking and capacity for sustained attention. This test requires participants to rapidly scan a page of numbers arrayed in rows and to cross out only digits designated as targets as quickly as possible. Time (in seconds) was recorded. Higher scores correspond to lower performance.

Cognition composite

A cognition composite was calculated using raw (not standardized or normed) test scores. First, the Trail Making Test Part A and Digit Vigilance times were multiplied by (-1) so that higher scores correspond to better performance; then the proportion of maximum scaling approach [41] was applied to the individual subtests. This approach transforms each score to a metric from 0 (minimum observed) to 1 (maximum observed) by first shifting scores so that the range starts at 0 and then dividing by the highest observed (shifted) value. The resulting value between 0 and 1 was multiplied by 100. This approach does not change the multivariate distribution and covariance matrix of the transformed variables and is the recommended approach for longitudinal data [42]. The scaled individual tests (Matrix Reasoning, Digit Span forward and backward, Trail Making Test A and A-B, Stroop word, Stroop color, Stroop color-word, and Digit Vigilance pages 1 and 2) were averaged together to create a cognition composite using all available data. At T1, no cognition data were missing. At T2, 1 participant was missing the Stroop test, 19 were missing Digit Vigilance pages 1 and 2, and 1 was missing just page 2. Higher composite scores indicate better cognition. Notably, this composite approach assumes that the individual neuropsychological tests have the same meaning and factor structure over time. The composite's multilevel reliability was calculated using coefficient omega (omegaSEM function in the multilevelTools package) and was adequate at both the between- (ω = .80, 95% CI [.62, .98]) and within-person levels (ω = .85, 95% CI [.79, .91]).
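A minimal R sketch of the proportion of maximum scaling and averaging described above is given below; the score matrix is hypothetical toy data (timed tests already sign-flipped so that higher values mean better performance), not the study data.

```r
# Proportion of maximum scaling (0-100) and a simple averaged composite.
poms <- function(x) {
  x0 <- x - min(x, na.rm = TRUE)       # shift so the range starts at 0
  100 * x0 / max(x0, na.rm = TRUE)     # divide by highest observed value, x 100
}

set.seed(4)
scores <- cbind(matrix_reasoning = rnorm(10, 25, 4),
                digit_span       = rnorm(10, 15, 3),
                trails_a_neg     = -rnorm(10, 30, 6))   # time multiplied by -1

scaled    <- apply(scores, 2, poms)
composite <- rowMeans(scaled, na.rm = TRUE)   # higher = better cognition
round(composite, 1)
```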
At T2, 1 participant was missing the Stroop test and 19 were missing Digit Vigilance pages 1 and 2 and 1 was missing just page 2. Higher composite scores indicate better cognition. Notably, this composite approach assumes that the individual neuropsychological tests have the same meaning and factor structure over time. The composite's multilevel reliability was calculated using coefficient omega (omegaSEM function in the multilevelTools package) and was adequate at both the between-(ω = .80, 95% CI [.62, .98]) and within-person levels (ω = .85, 95% CI [.79, .91]). Tissue acquisition and processing Fasting blood was collected by a trained phlebotomist between 8:00am and 10:00am. Whole blood samples were frozen in −80°C until time of DNA extraction and analysis. DNA was extracted using the DNeasy Blood and Tissue Kit (Qiagen) at the UCLA Cousins Center for Psychoneuroimmunology. Purified DNA was concentrated using GeneJET PCR Purification Kit (Thermo Fisher) and suspended in the elution buffer to a minimum of 12.5 ng/ul before plating in a 96-well plate. DNA was quantified using the Quant-iT dsDNA Assay Kit, high sensitivity (Invitrogen). Consideration for variability across assay chips was addressed by organizing samples from the same individual to be placed together on the same chip but randomly assigned by ID. In addition, samples from Decliners and Maintainers were assured to be evenly distributed within each chip, and position within chip was randomized. DNA methylation data pre-processing Bisulfite conversion using the Zymo EZ DNA Methylation Kit (ZymoResearch, Orange, CA, USA) and subsequent hybridization of the Human Methylation 850 K EPIC chip (Illumina, San Diego, CA, USA) and scanning (iScan, Illumina) were performed by the UCLA Neuroscience Genomics Core facilities according to the manufacturer's protocols. DNA methylation image data were processed in R statistical software (version 4.1.1) using the minfi Bioconductor package (version 1.38.0) [43]. We checked for samples with >1% of sites with detection p-values >0.01 (n = 0) and for samples with DNA methylation predicted sex discordant with recorded sex (n = 0). The minfi preprocessNoob function was used to normalize dye bias and apply background correction before obtaining methylation beta-values. Covariates Analyses were adjusted for participant age and sex. Additionally, because DNAm profiles may differ between cell subtypes [44] and cell composition changes with age, the percentages of six cell subtypes (CD8 total, CD4 total, NK cells, plasma blasts, monocytes, and granulocytes) were estimated from Horvath's website using the Houseman method [45] (and see [46] for validation) and further controlled for in sensitivity analyses. Some may consider controlling for cell subtypes to be unnecessary adjustment or overadjustment because cell subtypes may contribute to the observed differences in DNAm or be on a mediation pathway linking DNAm to aging outcomes; however, we present results both ways for interested readers. Data analysis All analyses were conducted using the traditional and PC-based epigenetic clocks and pace of aging measures. Further mention of DNAm refers to all measures unless specified. The DNAm measures were modeled individually in two multilevel models with repeated measures nested within person. Model 1 included the main effect of group (Maintainers, Decliners) and time (T1 and T2) on DNAm. Model 2 included the interaction between group and time to explore group differences in change in DNAm over time. 
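In statsmodels terms, the two repeated-measures specifications described above might be sketched as below; the study itself fit random-intercept models in R with nlme, and all variable names here (dnam_age, group, time, age_c, sex, id) are placeholders, with the age and sex covariates detailed in the following paragraph.

```python
# Hedged sketch of the two models: repeated measures nested within person,
# modeled with a random intercept per participant (the study used R's nlme).
import statsmodels.formula.api as smf

def fit_models(long_df):
    m1 = smf.mixedlm("dnam_age ~ group + time + age_c + sex",
                     data=long_df, groups=long_df["id"]).fit()   # Model 1: main effects
    m2 = smf.mixedlm("dnam_age ~ group * time + age_c + sex",
                     data=long_df, groups=long_df["id"]).fit()   # Model 2: adds group x time
    return m1, m2
```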
All models controlled for baseline chronological age (grand mean centered at 44.79 years) and sex (0 = male, 1 = female, as a factor variable). Notably, because these statistical models control for level 2 (time-invariant) chronological age and include level 1 (time-varying) time as a predictor, our findings can be considered in terms of "age acceleration", which in cross-sectional studies is achieved by controlling for chronological age or outputting residuals from DNAm age regressed on chronological age. Sensitivity analyses further controlled for the percentages of six cell subtypes (CD8 T cells, CD4 T cells, NK cells, plasma blasts, monocytes, and granulocytes), treated as time-varying covariates. Statistical analyses were conducted in R version 4.1.1 using the nlme package (version 3.1.152). The variancecovariance structure was modeled as a random intercept in all models. Gamma weights (γ), analogous to unstandardized beta weights (i.e., a 1-unit change in the predictor [Decliner vs. Maintainer, or T1 vs. T2] is associated with γ-year change in the outcome), are reported with their 95% confidence intervals (CIs) in tables. We adjusted for multiple comparisons using the Benjamini-Hochberg (BH) correction (using the p.adjust function in R) [18]. To examine different levels of stringency, false discovery rates (FDRs) of .05 and .10 were calculated and chosen to ensure no true discoveries were missed while balancing the number of false positives. FDRs can be interpreted as the expected proportion of false positives among all statistically significant tests. Power considerations We selected 24 participants per group to balance funding constraints with generating preliminary data. Although we maximized our ability to detect effects by selecting cognitive groups from extremes of the distribution of change in cognitive performance, the smaller sample size affects our power nonetheless. There is no conventional method for computing power in a multilevel model; however, for a parallel two-group independent t-test with 24 participants per group and alpha set to .05, power of 0.80 can detect approximately Cohen's d = 0.82 (see power curve plotted in Supplementary Figure 1). Therefore, the current study was powered to detect large effects for comparing DNAm measures between groups; we had low statistical power to explore group by time interactions on DNAm measures. Overview of DNAm clocks The DNAm clock measures were developed using supervised machine learning techniques to derive algorithms that capture DNAm patterns that predict a dependent variable of interest, or a surrogate of "biological age". The dependent variables differ across the different types of clocks. First-generation clocks The first-generation clocks were trained to predict chronological age. Hannum et al. [1] developed an epigenetic clock (71 CpGs) using whole blood samples from 656 individuals (426 Caucasian and 120 Hispanic) aged 19 to 101. The Hannum clock used in the current study does not include cell distribution data. However, for completeness, there is a version of the Hannum clock known as extrinsic epigenetic age acceleration (EEAA) that is a weighted average of Hannum's estimate with naïve and exhausted CD8 T cells and plasma blasts and adjusted for chronological age [2]. Horvath [3] developed a multi-tissue epigenetic clock (353 CpGs) from 8,000 samples (82 different datasets) representing people across the lifespan. 
The Horvath clock used in the current study does not include cell distribution data; there is a version of the Horvath clock defined as the residual resulting from regressing Horvath's DNAm age on chronological age and 7 blood cell types (naïve and exhausted CD8 T cells, plasma blasts, CD4 T cells, NK cells, monocytes, and granulocytes) and is known as intrinsic epigenetic age acceleration (IEAA) [4]. Second-generation clocks The second-generation clocks were optimized for lifespan prediction. Levine et al. [5] proposed the "PhenoAge" clock, which was developed in two steps. First, using data from the National Health and Nutrition Examination Survey (9,926 people ages 20 and over), they developed a measure of "phenotypic age" by selecting from 42 blood-based clinical markers those that predicted mortality. Based on this analysis, 9 blood-based clinical markers (see table below) and chronological age were selected and combined into a phenotypic age estimate and validated in a new sample to predict allcause mortality. In the second step, data from 465 participants aged 21-100 years in the Invecchiare in Chianti (InCHIANTI) study were used to regress phenotypic age on CpG sites. From this, the PhenoAge clock (513 CpGs) was developed, which strongly relates to all-cause mortality and aging-related morbidity [5]. Principal components (PC)-based clocks Traditional epigenetic clocks use individual CpG sites as inputs to the epigenetic age algorithms, but individual CpGs are unreliable and noisy [9]. Therefore, Higgins-Chen et al. proposed [10] that principal components analysis (PCA) can be used to enhance the reliability of traditional epigenetic clocks by extracting shared systematic variation across CpG sites (principal components, PCs) and feeding those PCs into the elastic net regressions to predict chronological age or other health phenotype. Higgins-Chen et al. provides R code that has users project their own DNAm data onto the original PCA space, which then allows PC-based clock outcomes to be estimated from new data. PC-based clocks show agreement between technical replicates (the same sample measured twice) within 0 to 1.5 years and more stable trajectories in longitudinal studies [10]. PC-based clocks have been used in other published studies (e.g., [11]).
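The PC-clock training idea described above — PCA on CpG beta-values followed by an elastic net on the component scores — can be illustrated schematically. This is not the published PC-clock pipeline, CpG set, or coefficients; it is only a sketch of the approach.

```python
# Schematic sketch of the PC-clock idea: extract shared variation across CpGs with PCA,
# then fit a penalized regression on the principal component scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import ElasticNetCV
from sklearn.pipeline import make_pipeline

def train_pc_clock(betas: np.ndarray, chronological_age: np.ndarray):
    """betas: samples x CpG matrix of methylation beta-values."""
    model = make_pipeline(
        PCA(n_components=0.95),              # keep components explaining 95% of variance
        ElasticNetCV(l1_ratio=0.5, cv=5),    # elastic net on the PC scores
    )
    model.fit(betas, chronological_age)
    return model  # model.predict(new_betas) then gives a PC-based DNAm age estimate
```

Projecting new samples through the same fitted PCA before prediction is what makes the resulting estimates less sensitive to noise in individual CpGs.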
2022-11-13T16:13:14.621Z
2022-11-11T00:00:00.000
{ "year": 2022, "sha1": "e774a5dbc8a29d714709c57a09bf6754e5fe34bb", "oa_license": "CCBY", "oa_url": "https://www.aging-us.com/article/204376/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2278fbfa88a55fe8c3dc3c4319f666a36bc81a60", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
2759996
pes2o/s2orc
v3-fos-license
A Microfluidic Platform Containing Sidewall Microgrooves for Cell Positioning and Trapping Microfluidic channels enable the control of cell positioning and the capturing of cells for high-throughput screening and other cellular applications. In this paper, a simple microfluidic platform is proposed for capturing small volumes of cells using sidewall microgrooves. The cell docking patterns in the channels containing sidewall microgroove are also studied. Both numerical and experimental investigations are performed within channels containing sidewall microgrooves of three different widths (i.e., 50, 100 and 200 μm). It is observed that channels containing sidewall microgrooves play an important role in regulating cell positioning and patterning. The obtained results revealed that 10 to 14 cells were positioned inside the sidewall channels of 200 μm width, two to five cells were positioned within the channels of 100 μm width, and one to two individual cells were docked within the sidewall channel of 50 μm width. Particle modelling shows the prediction of cell positioning within sidewall microgrooves. The positions of cells docked within microgroove-containing channels were also quantified. Furthermore, the shear stress variation and cell positioning in the sidewall microgrooves were correlated. Therefore, these sidewall microgroove-containing channels could be potentially useful for regulating cell positioning and patterning on two-dimensional surfaces, three-dimensional microenvironments and high-throughput screening. Cell patterning and positioning are of great importance in many biological applications, such as drug screening and cell-based biosensing. Introduction Microfluidic platforms hold great promise for biochemical synthesis, high-throughput drug screening, and cell-based biological assay [1][2][3][4][5]. Microfluidic devices offer the possibility of controlling fluid flow, generating stable concentration gradients and regulating cell-soluble factor interaction in a temporal and spatial manner [6 -9, 40]. The poly(dimethylsiloxane) (PDMS)-based microfluidic devices offer a number of advantages, such as low cost, short reaction time, high-throughput analysis, and realtime monitoring of biological processes [10][11][12][38][39][40]. Furthermore, the microfluidic devices enable the control of cell docking and immobilization in a well-defined microenvironment, features necessary for cell-based screening applications [13][14][15][16][17][18]. Moreover, cell patterning and positioning are of great importance in many biological applications, such as drug screening and cell-based biosensing [39]. It has been shown that microfluidic devices containing shear-protective microgrooved regions located at the bottom of the substrate have the ability to control cell positioning [15][16]. The previous microgrooved regions located at the bottom of the channels provided shearprotective regions and regulated micro-circulation, resulting in cell docking and positioning; however, these approaches have some limitations, such as the fact that cells were attached to the bottom substrates so that it might be difficult to control the docking of a small number of cells. To overcome these challenges, we consider sidewall microgroove-containing channels to regulate cell docking and positioning. By using the sidewall microgrooves in the microchannels, we enable the capture of a small number of cells within the microfluidic device, showing more control over cell docking and positioning. 
Furthermore, it may be possible to co-culture different cell types in the sidewall microgrooved channels. It has been shown that a microfluidic system containing high-quality, small volumes of cells is required for studying quantitative system biology [19][20]. For example, a cup-shaped, high-density hydrodynamic cell isolation microfluidic device has been previously developed [19]. Individual cells were docked within cupshaped microstructures and single-cell enzymatic kinetics was analysed. Two-layer cup-shaped arrays allow for the fluidic streamlines necessary for the cell trapping. When one cell was occupied within a cup-shaped array, the flow was diverted and then another cell was trapped within a neighbouring cup-shaped array. Furthermore, a microfluidic cell pairing device has been developed to study electrical fusion analysis [20]. Two cell types were captured and paired in two cup-shaped cell isolation microfluidic devices containing a larger capture cup and a smaller backside capture cup. Although this microfluidic channel enables the capture of individual single cells within both the larger front-side cup and the smaller back-side cup, a complex three-step cell loading is required. Moreover, a multilayer microfluidic device with permeable polymer barriers for the capture and transport of cells with microvalves was developed, which requires alignment and a complex fabrication process [41]. In contrast, our proposed microfluidic platform provides significant advantages over these methods. (i) It is very simple; (ii) it is one-layer; (iii) it provides a platform for capturing a small number of cells; (iv) it uses a one-step microfabrication process which does not require any alignment between the bottom substrate and the microfluidic channel; and (v) it allows for highdensity microscopic analysis. In this paper, a microfluidic device containing sidewall microgrooves that enables the trapping and positioning of cells in a controlled manner is developed. Furthermore, cell positioning in sidewall microgrooves is analysed. The effect of the cell docking and positioning on the sidewall microgroove-containing channels is also investigated. Computational simulations provided estimates of particle tracing patterns, which were accurate proxies for cell positioning. Computational modelling is compared to the experimental results of cell docking within the sidewall microgrooved channels. The particle trajectory was also predicted in sidewall microgrooves containing square microgeometry. Both numerical and experimental results are presented to demonstrate that the proposed microfluidic device containing sidewall grooves in the microchannel could be a potentially useful tool for studying the docking and positioning of small numbers of cells, down to one to two individual cells. Fabrication of the microfluidic device containing sidewall microgrooves Microfluidic devices with sidewall microgrooves were fabricated using the photolithography technique that has been previously developed [21][22][23][24] (Figure 1). The silicon master mould was made using a negative photoresist (SU-8 2050, Microchem, MA). To make sidewall microgroove patterns of 80 µm thickness, SU-8 2050 was spin-coated at 1,500 rpm for 60 sec, baked for 8 min and 25 min at 65 °C and 95 °C, respectively, and exposed to UV for 3 min. After UV exposure, the photoresist-patterned silicon master was post-baked for 1 min and 8 min at 65 °C and 95 °C, respectively. 
The negative replica of the microfluidic channel was moulded in poly (dimethylsiloxane) (PDMS) (Sylgard 184 Silicon elastomer, Dow Corning, MI). The PDMS prepolymer mixed with silicone elastomer and curing agent (10:1) was poured on the master and cured at 70 °C for two hours. PDMS moulds were removed from the photoresistpatterned master. An inlet and outlet of the microfluidic channel were punched by sharp punchers for cell seeding and medium perfusion. The sidewall microgroovecontaining channel and the bottom PDMS substrate were irreversibly bonded using oxygen plasma (5 min at 30 W, Harrick Scientific, NY). Sidewall microgrooves in the microchannels were placed perpendicular to the fluidic flow direction in the microfluidic device. Cell docking in a microfluidic device NIH 3T3 mouse fibroblasts were cultured in Dulbecco's Modified Eagle Medium (DMEM, Invitrogen, CA) containing 10 % fetal bovine serum (FBS, Invitrogen) and 1 % penicillin/streptomycin (Invitrogen, CA). To seed the cells into the microfluidic sidewall channel, the cells were trypsinized and dissociated with culture medium. A counting chamber, also known as a hemocytometer, was used to obtain the cell density. The cells were seeded in a microfluidic device through a cell inlet port at the cell density of 6x10 6 cells/mL. After 20 min cell seeding, the medium was infused using a syringe pump at a flow rate of 5 µL/min. The medium was pumped to the inlet port of the microfluidic device (the obtained flow direction was from left to right, as illustrated by an arrow in Figure 2C). Image analysis for cell docking and retention Cell images were obtained using an inverted microscope (Nikon TE 2000-U, USA). To analyse cell docking within sidewall microgrooves in the microchannel, we obtained cell numbers and their location through image analysis. The average cell size in the microgrooves was quantified by ImageJ software. The size of the loaded 3T3 fibroblast cells was on average 10 µm. The experiments were performed with different microgroove sizes three times in a microfluidic device. Statistical analysis was performed using the student t-test. To estimate the cell penetration into the sidewall microgrooves, we performed a numerical simulation of our experimental setup. In our modelling, the unstructured mesh generation method was used for constructing the 3D mesh domain. Our fluid modelling is based on incompressible Navier-Stokes equations [36,37] with the Stokes hypothesis assumed in conservation form for an arbitrary geometry. The governing equations can be written as follows: Continuity equation: Momentum equation: where V → represents velocity (m/s), p pressure (Pa), ρ density (kg/m 3 ), μ dynamic viscosity of fluid (Pa Sec), and t time (sec). The properties of fluid (medium) in our modelling are considered to be the same as those of water; which implies the density of 1000 (kg/m 3 ) and dynamic viscosity of 0.001 (Pa Sec). For our numerical modelling, the boundary conditions at the walls and at the bottom of the microgrooves is set as noslip boundary conditions. The specified velocity condition is applied for the inflow boundary condition (Dirichlet boundary condition). Moreover, the specified pressure is used for the outflow boundary condition (Dirichlet boundary condition). The outlet static pressure of 0 (Pa) is applied for our case. Furthermore, the criteria for convergence (RMS residual) are considered to be equal to 10 -6 . 
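For reference, the continuity and momentum equations cited above are the standard incompressible Navier–Stokes equations for a Newtonian fluid; written in their usual (not necessarily the original paper's exact) notation they are

$$\nabla \cdot \vec{V} = 0,$$

$$\rho \left( \frac{\partial \vec{V}}{\partial t} + (\vec{V} \cdot \nabla)\vec{V} \right) = -\nabla p + \mu \nabla^{2} \vec{V},$$

with the velocity, pressure, density and viscosity symbols defined as above. At the Reynolds numbers reported below (Re < 1), the inertial terms on the left-hand side of the momentum equation are negligible and the flow is effectively creeping (Stokes) flow.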
Sidewall microgroove-containing channels A PDMS-based microfluidic device with sidewall microgrooves was developed to regulate and control cell positioning and docking ( Figure 1). This microfluidic device mainly consists of sidewall microgrooves (50x50, 100x100 and 200x200 µm) and posts (125, 250 and 375 µm radius) that enables control of flow velocity and shear stress profiles. Three types of microchannels (500, 1000 and 1500 µm widths) were fabricated. A microchannel with a 500 µm width has a 250 µm post diameter; a microchannel with a 1000 µm width has a 500 µm post diameter; and a microchannel with a 1500 µm width has a 750 µm post diameter. As shown in Figure 1B, our fluidic channel containing sidewall microgrooves was irreversibly bonded to a PDMS substrate. To analyse cell positioning within sidewall microgrooves, cells were seeded into a microfluidic device through a cell inlet port and medium was subsequently infused using a syringe pump. This microfluidic device has several advantages over previous cell docking microfluidic platforms, because we can regulate the docking of small numbers of cells, down to one to two individual cells, fewer than in the earlier studies [15][16]. The one-step microfabrication process we used in this paper did not require any alignment between microgrooves and the microfluidic channel layer, whereas it is an essential part of other approaches [19,41]. Sidewall microgrooves were fabricated in the channels to analyse cell docking behaviour without any of the gravity effect which is usually generated within bottom-microgrooved channels [15][16]. It is shown that the cell docking in bottom-microgrooved channels is significantly regulated by the gravity and shear stress profiles [15][16]. Thus, it is not easy to identify which parameter is more important to regulate and control cell docking and positioning. To address this issue, sidewall microgrooves were developed in the microchannels. Additionally, to better understand the effect of geometrical factors, four different parameters were varied, which include post radius (R p : 125, 250, 375 µm), channel width (W c : 500, 1000, 1500 µm), microgroove length (L g : 50, 100, 200 µm), and microgroove depth (W g : 50, 100, 200 µm). The geometrical parameters involved are shown in Figure 1B. These four spatial variables were also scaled by the microgroove width W g which is equal to the microgroove length L g in our studies. Moreover, two dimensionless ratios were defined W c * =W c /W g , and R p * =R p / W g . As a result, the number of geometrical factors involved was reduced to two dimensionless ratios. The obtained values of R p * corresponding to the microfabricated microfluidic devices are 0.625, 1.25 and 1.875, respectively. We experimentally and theoretically evaluated the effect of these different parameters for cell docking and positioning within sidewall microgroove-containing channels. Cell positioning within sidewall microgrooves Cell docking and positioning were analysed within a microfluidic channel containing sidewall microgrooves (50, 100 and 200 µm widths) ( Figure 2). The distance between the centres of one microgroove to the next one is equal to 1.5 W g . Figure 2A-C represents the cell distribution within the sidewall microgrooves. Through an image analysis approach, the number of cells and their position within sidewall microgroove channels were obtained. As discussed earlier, the flow direction is presented in Figure 2C by an arrow, and is from left to right. 
Hence, the centre, upstream and downstream of the sidewall microgrooves are classified based on the flow direction and position of the post. Cell docking analysis showed that cell docking was significantly regulated by the geometry (i.e., groove width) of sidewall microgrooves. Figure 2D, E, F provides an easy comparison of cell counts for three different sidewall microgrooves (50, 100 and 200 µm in widths). It was found that different numbers of cells were docked within three sidewall microgrooves. It was revealed that two to five cells were positioned within 100 µm wide sidewall microgrooves, while 10 to14 cells were docked within 200 µm wide sidewall microgrooves in a microfluidic device with a 500 µm channel width. However, only a few cells (one to two) were docked within 50 µm wide sidewall grooves. Cell docking results demonstrated that the number of cells docking within larger sidewall microgrooves (200 µm in width) is much higher than that of cells docking within smaller sidewall microgrooves (50 µm in width). Significant differences between the number of cells docked in the microfluidic device with 500 µm and 1500 µm channel widths were not observed. This could be related to the small size of the sidewall groove relative to the width of the channel itself. Not much difference in cell docking among the sidewall microgrooves of upstream, centre and downstream was observed. This can be explained by the fact that the post is located far away from the sidewall microgrooves. If the distance between a post and sidewall microgroove was short, the number of cells docking at the centre of the sidewall grooves might be higher compared to the upstream and downstream of the sidewall microgrooves. To confirm this hypothesis, fluidic flow and shear stress profiles were simulated. The obtained results for simulation are discussed in the theoretical modelling section. Generally, it was observed that cells were positioned and located at the centre of the sidewall micro-grooves. Therefore, the obtained result for the sidewall microgrooves should prove useful for co-culturing different cell types. To support our experimental data, cell docking and positioning were analysed using histograms. Figure 3 represents the two-dimensional projection of the 3D histogram for different microgroove sizes and channel widths. For comparison purposes, the length and width of all microgrooves for each channel width, 500, 1000 and 1500 µm, was normalized in Figure 3. The aforementioned histogram verifies that the distribution probability of cells inside the sidewall microgrooves is higher in the highlighted regions. It is also observed that the histogram distribu-tion has a trend toward the central region of the sidewall microgrooves, and the probability of the cell docking is higher in the middle of the microgrooves. This observation indicates that cells will be docked within designated shearprotective sidewall microgrooves. Consequently, the cell docking and positioning can be regulated and controlled by this approach. It was noted that those cells that were not docked within the shear-protective region were removed by the medium perfusion. Micropost design considerations One of the distinct features of our microfluidic device relates to the incorporation of the microposts. These posts are aligned in the middle of the microchannels as shown in Figure 1A. The microposts play an important role in flow diversion. Moreover, these microposts facilitate the changing of the streamline patterns and velocity contours. 
A schematic presentation of flow diversion around the post is shown in Figure 4A. This diversion in the fluid flow should prove useful in case of delivering different drugs to the cells immobilized in the upper microgrooves as opposed to those residing in the lower microgrooves. The effect of the incorporation of the micropost on 3D particle simulation within the sidewall microgroove with and without micropost is illustrated in Figure 4B and C. In addition, the change in the velocity distributions within the sidewall microgroove with and without micropost is shown in Figure 4D and E. Furthermore, the effect of the inclusion of micropost and the effect of changing its diameter on the streamline distribution are demonstrated in Figure 5A, B and C. We note that by inclusion of the micropost the streamlines get closer together underneath the micropost. This streamline pattern change causes an increase in the fluid velocity below the micropost region while keeping the velocity in the microgroove area very low. Furthermore, through the particle simulation it was noted that inclusion of the micropost facilitates better particle penetration in the sidewall grooves. Theoretical modelling of the cell position A variety of numerical experiments for sidewall microgrooves were investigated. As mentioned earlier, to consider all the fabricated microfluidic devices, different geometry and channel sizes were simulated. Hence, three different microgroove sizes were considered. The series started from 200x200, continued to the size of 100x100, and concluded with the size of 50x50. The three-dimensional modelling of the sidewall microgrooves is considered since in our case studies the depth (perpendicular to the screen) to height ratio of the microgrooves was not above one. Therefore, it will not be justified to use two-dimensional modelling for the prediction of flow pattern and streamlines in our solution domains. In the modelling, the maximum Reynolds number was Re maxmax =0.375. This range of the Reynolds number is within the limit of laminar flow, or more precisely creeping flow, Re<1. Hence, the obtained experimental flow regime is in agreement with the presented numerical modelling and consistent with the assumptions made for analytical solution. The shear stress variation inside the groove is shown in Figure 6. Different sections in the groove have been considered. The shear stress variation is shown for different groove sizes and different channel widths. In the groove itself, three horizontals (upper, middle and lower part) are shown by letters a, b and c ( Figure 6). It was observed that the shear stress is one order of magnitude lower in the 50x50 groove size in comparison with 100x100 and 200x200 grooves. Moreover, there is not much variation from a-a, bb and c-c sections of the 50x50 groove size. It was also noted that there are two more peaks in the shear stress profile (cc section) of the 50x50 groove size in comparison with the others. This observation can be explained by considering the fact that there is a combined corner and wall effect on this small region, and also the velocity is much lower in this region. In contrast, when the groove size becomes larger at 100x100 and 200x200, as shown in Figure 6, the shear stress becomes higher in the (a-a) section or upper part of the microgroove, whereas it decreases in the middle and especially the lower part of the groove. This explains why most of the cells accumulate in the middle section of the groove. 
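An order-of-magnitude check of this flow regime can be made from the channel geometry and perfusion rate given earlier; the sketch below uses illustrative values (a 500 µm x 80 µm channel, 5 µL/min perfusion, water-like medium) and a parallel-plate approximation for the wall shear stress, so the numbers are indicative only.

```python
# Order-of-magnitude check of the flow regime in a rectangular microchannel.
# All parameter values are illustrative, not taken verbatim from the paper's simulations.
rho, mu = 1000.0, 1.0e-3                 # density (kg/m^3), dynamic viscosity (Pa s)
w, h = 500e-6, 80e-6                     # channel width and height (m)
q = 5e-9 / 60.0                          # 5 uL/min converted to m^3/s
area = w * h
u_mean = q / area                        # mean velocity (m/s)
d_h = 2 * w * h / (w + h)                # hydraulic diameter of a rectangular duct (m)
reynolds = rho * u_mean * d_h / mu       # well below 1 -> creeping (Stokes) flow
tau_wall = 6 * mu * q / (w * h**2)       # parallel-plate estimate of wall shear stress (Pa)
print(f"Re ~ {reynolds:.3f}, wall shear ~ {tau_wall:.3f} Pa")
```

With these values the estimate gives Re on the order of 0.3, consistent with the creeping-flow regime stated above, and a nominal wall shear stress of a few tenths of a pascal in the main channel, well above the levels inside the grooves.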
It can be concluded that there is a threshold window where the cells prefer to stay in that region. It can also be seen that the experimental observations are in good agreement with the numerical simulations. The cells lie in the region predicted by the numerical modelling and can be correlated to the histogram of cell positions provided in Figure 3. To study the effect of varying the channel width on the shear stress variation inside the microgroove, the micropost radius was kept constant and the width of the channel was changed. Simulations were run for the representative values of R p * (0.625, 1.25 and 1.875), which correspond to the microfabricated microfluidic devices. For each case study, the microchannel width W c * was chosen as between 1.25 to 5 to cover all the corresponding microfabricated geometries. The results obtained for the upper region of the microgroove (a-a section) are shown in Figure 7. The shear stress magnitude decreases by increasing the channel width, and this is applicable for all three microfabricated channel widths. As expected, as the microchannel becomes wider, the velocity and shear stress values decrease when all the parameters are constant. Furthermore, the obtained numerical results correlate with the effect of micropost size in the shear stress distribution and cell positioning. Therefore, microposts can play a role as a geometrical control over the cell positioning in the sidewall microgrooves. To further understand the effect of variation of the width of microgroove on cell penetration, a set of numerical simulations was performed for three different geometry setups with a microchannel width of Wc (250, 500 and 750 µm). Cell penetration defined by Z is shown in Figure 8. In all cases we assumed that the micropost centre and the groove are aligned perfectly, and the microgroove lengths were normalized. Generally, it was noted that by increasing the microgroove size, the cell penetration is increased. These graphs are in agreement with the obtained values in the previous graphs and data. Conclusions In this study, we developed a unique and simple microfluidic platform for capturing a small volume of cells using sidewall microgroove-containing channels and microposts. It was demonstrated that the micropost size has an effect on the shear stress distribution inside the microgrooves. It was also observed that microgroove size plays a key role in cell capturing and cell positioning. In addition, the numerical modelling for predicting cell positioning inside the microgroove is presented. The effect of channel width variation on cell penetration is also investigated. Furthermore, the histograms of cell locations in the microgrooves were provided, and the most probable destination of the cells was shown. Sidewall microgroovecontaining channels provide a platform for cell positioning and a shear-protected area for cell study, and are easily observable by a microscope. Hence, this simple yet adaptable microfluidic device should be useful for high-throughput screening, cell-based biological assay and cell-based biosensing, and also allow for high-density microscopic analysis with simplified image processing. Supplementary Data To expand our numerical simulations, a variety of numerical experiments for sidewall microgrooves with dimensions other than those shown in Figure 6, 7 and 8 were investigated and the obtained results are presented in this section. 
The shear stress variation inside the microgroove is shown in supplemental Figures 1 and 2. It can be seen from supplemental Figures 1 and 2 that the shear stress is higher in the (a-a) section, or the upper part of the microgroove, whereas it decreases in the middle (b-b) and especially the lower (c-c) part of the groove. This is consistent with the results obtained in Figures 6 and 7. Furthermore, the effect of variation of width on the cell penetration for three different geometry setups with Rp* = 0.625, 1.25 and 1.875 within the sidewall microgroove is presented in supplemental Figure 3. In these simulations, the values of the microchannel width Wc* were varied from 1.25 to 4.3. It was observed that, by keeping the micropost radius constant and increasing the microgroove width, the cell penetration is deepened. The obtained graphs reveal behaviour similar to that presented in Figure 8. Conflict of interest Authors declare no conflict of interest. No part of this study was performed on any human or animal subjects. Acknowledgements The first author would like to thank Prof. A. Khademhosseini and Dr. B.G. Chung for scientific discussions. Financial support through NSF-IGERT is greatly appreciated.
2016-09-21T08:51:56.807Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "e7c6c8dbfaa1105558ee5a84f5a0d0c34190e6cb", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5772/60562", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e7c6c8dbfaa1105558ee5a84f5a0d0c34190e6cb", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
257057404
pes2o/s2orc
v3-fos-license
A computational method for predicting nucleocapsid protein in retroviruses Nucleocapsid protein (NC) in the group-specific antigen (gag) of retrovirus is essential in the interactions of most retroviral gag proteins with RNAs. Computational method to predict NCs would benefit subsequent structure analysis and functional study on them. However, no computational method to predict the exact locations of NCs in retroviruses has been proposed yet. The wide range of length variation of NCs also increases the difficulties. In this paper, a computational method to identify NCs in retroviruses is proposed. All available retrovirus sequences with NC annotations were collected from NCBI. Models based on random forest (RF) and weighted support vector machine (WSVM) were built to predict initiation and termination sites of NCs. Factor analysis scales of generalized amino acid information along with position weight matrix were utilized to generate the feature space. Homology based gene prediction methods were also compared and integrated to bring out better predicting performance. Candidate initiation and termination sites predicted were then combined and screened according to their intervals, decision values and alignment scores. All available gag sequences without NC annotations were scanned with the model to detect putative NCs. Geometric means of sensitivity and specificity generated from prediction of initiation and termination sites under fivefold cross-validation are 0.9900 and 0.9548 respectively. 90.91% of all the collected retrovirus sequences with NC annotations could be predicted totally correct by the model combining WSVM, RF and simple alignment. The composite model performs better than the simplex ones. 235 putative NCs in unannotated gags were detected by the model. Our prediction method performs well on NC recognition and could also be expanded to solve other gene prediction problems, especially those whose training samples have large length variations. www.nature.com/scientificreports/ that could predict the precise locations of NCs' initiation and termination sites has been proposed yet. Besides, length of NCs from different retroviridae genera varies from 48 to 126aa according to records in National Center for Biotechnology Information (NCBI), thus gene prediction methods for genes with conserved lengths are not applicable. Furthermore, classical database search tools [13][14][15] couldn't achieve satisfying results in prediction of retrovirus genes with large length variation 16 . Therefore, there is an urgent need to come up with a computational model for NC prediction. In this paper, computational models to identify NCs from retroviruses were proposed. All available annotated NC sequences in retroviruses were collected for the training and testing process. Position weight matrix (PWM) along with all six parameters of factor analysis scales of generalized amino acid information (FASGAI) 17 were used to generate the feature space for NC prediction. The initiation and termination sites of NCs were separately predicted and combined together afterwards to acquire high prediction accuracy when dealing with sequences that are poorly conserved in their lengths. Their performance was tested by fivefold cross validation test. A composite ab initio model to predict intact NCs from genetic sequences was then proposed. It performs better than the simplex ones. 
All of the 6651 available gag sequences without NC annotations were scanned with the composite model and 282 putative NCs in them were found. Materials and methods NC collection. All available amino acid sequences of retroviruses with their NCs annotated based on experimental evidences were collected from NCBI at http:// www. ncbi. nlm. nih. gov. There are 77 of such sequences in total. Among them, 4 of them are beta-retrovirus, 13 of them are gamma-retrovirus, 9 of them are from deltaretrovirus, 2 of them are epsilon-retrovirus and the other 49 of them are lentivirus. All these sequences were used for the following training and testing process. All of them are with intact NC structures (please refer to S1 File for details). Separate prediction of NC boundaries. Traditional gene predicting methods could performance well when predicting gene sequences with fixed lengths. However, when it comes to gene sequences with large length variations, such methods may lose effectiveness or even feasibility. This might be because the constant dimension of feature space used in traditional methods couldn't represent features of such genes properly. The lengths of annotated NCs in retroviruses range from 48 to 126aa, so an approach to revise the traditional gene predicting methods to fit the NC predicting problem is needed. Our predicting method focuses more on the border areas adjacent to the start and end of NCs instead of interior areas away from them, for the former contain more effective information for gene prediction and are usually more conservative. The fixed length flanking residues of the initiation site and termination site were predicted to deduce the precise locations of the start and end of NCs. Initiation site and termination site were predicted separately, and the sequence between them were regarded as a candidate NC sequence only when it's length is reasonable. Then the most probable candidate NC was singled out among all candidate NCs to be the final putative NC according to the decision value and alignment score involving it. This technique brings out both feasibility and high accuracy. Sample preparation. Two sets of training samples for initiation and termination sites prediction respectively were built separately. The training samples for initiation sites could be denoted as: Similarly, the training samples for termination sites could be denoted as: Here, I p denotes a positive training sample of initiation site generated from a gag sequence, I n denotes a negative training sample of initiation site. Similarly, T p denotes a positive training sample of termination site and T n denotes a negative training sample of termination site. Init(NC) and Term(NC) represent the true initiation site and termination site of a NC sequence. osi and ost are randomly generated offsets added to initiation and termination site locations respectively to generate negative samples. L is and L ts denote the length of initiation samples and termination samples. We generated the negative sample set with a size 5 times as large as the positive sample set and took the imbalanced sample sets problem into our consideration in the modelling process, to overcome the difficulty of the lack of positive training samples. Feature selection. A hybrid feature space construction approach was proposed by combining position characteristics and physicochemical properties of sequences. (1) I p = gag(i : i + L is − 1) www.nature.com/scientificreports/ Position characteristics. 
The widely recognized PWM 18 was applied to extract the position characteristic of sequences. By aligning residues starting from initiation sites or ending at termination sites of positive NC sequences, PWMs are defined as follow: Here, f kj stands for the absolute frequency of amino acid k in the jth position of N aligned sequences of length l , j ∈ (1, ..., l) , k is the set of amino acids, b k = 1/|k| (|k|=20 for amino acids, so b k = 0.05). After generating the PWM, the position characteristic of any l-aa-long sequence V was extracted by the following mapping method. Each amino acid of V was assigned with its corresponding value in the matrix according to its position. Then a l-dimension-vector V Pos was generated to represent the position characteristic of the original l-aa-long sequence: where j ∈ (1, ..., l) , k = V j . Physicochemical properties. All 6 parameters of the FASGAI 19 were selected to extract the physicochemical properties of sequences (Please refer to S2 File for details of FASGAI). FASGAI involves hydrophobicity, alpha and turn propensities, bulky properties, compositional characteristics, local flexibility, and electronic properties derived from 335 property parameters of 20-coded amino acids. Thus when dealing with an l-aa-long sequence, the sequence was mapped into a 6 × l matrix to represent its physicochemical properties. After combining the position characteristics and physicochemical properties, a feature space with (1 + 6) × l features in total was established for the l-aa-long sequence. Binary classifiers. In our previous study, three binary classifiers based on different principles were applied to the same feature space to test and compare their predicting abilities: weighted support vector machine (WSVM), random forest (RF) and weighted extreme learning machine (WELM). And we found that the combination of the first two of them could generate the best predicting performance 20 . Prediction models based on WSVM and RF were separately built to predict the initiation and termination sites of NCs. Finding candidate NCs. After the probable NC start and end locations were predicted, a combination method to combine them is required. As there may be several possible NC start and end combination pairs in one unannotated gag sequence, it is necessary to dispose all the less probable putative combinations and leave the most probable one as the final prediction result. The details of such ruling out strategy are shown as follow: Step 1: Keep all the putative NC boundary pairs generated from RF models which have interval distance within the range of NC sequence lengths as candidate NC boundary pairs. For the mth and nth amino acids in a gag sequence, the amino acid pair (m, n) is a candidate NC pair only when it satisfies: where NC min and NC max are the minimum and maximum lengths of annotated NCs respectively, e min and e max are natural numbers and act as the relaxation parameters for the minimum and maximum NC lengths respectively. L is and L ts denotes the length of initiation samples and termination samples.C RFI and C RFT are Boolean variables, their values indicate whether the mth and nth amino acids of gag sequence S are candidate initiation site and termination site respectively according to the prediction results from random forest models. Step 2: Calculate the products of decision values of initiation and termination sites of all candidate NC boundary pairs sorted out in step 1. 
Then keep the candidate boundary pair with the largest product as the putative NC (A decision value is generated from WSVM models according to the distance of a sample to the classification hyper plane. The prediction result is more likely to be positive when the decision value is larger, vice versa.). Consider amino acid pair (m, n) as a putative NC pair only when it also satisfies: where D WSVMI (S, m, L is ) and D WSVMT (S, n, L ts ) are decision values assigned to the mth and the nth amino acids of gag sequence S after computation of the WSVM models. (m, n) also satisfies the constraints in (6). Combination with homology based method. After the screening process, the putative NCs generated by WSVM & RF models are compared with putative NCs generated from homology based methods. The results are then combined together to enhance the prediction performance. First we introduce a simple alignment (SA). Thus the locations of the putative initiation site and termination site are shown as follow: www.nature.com/scientificreports/ Here max D P = max D WSVMI (S, m, L is ) · D WSVMT (S, n, L ts ) ( m and n also satisfy the constraints shown in (6), max A P = max A I (S, p, L is ) · A T (S, q, L ts ) , subject to Here A I (S, p, L is ) is the maximum alignment score generated from a L is long subsequence starting from the pth amino acid of gag sequence S after comparing it with all the positive training samples of initiation sites. Analogously, A T (S, q, L is ) is the maximum alignment score of a L ts long subsequence ending the qth amino acid of S . The alignment function Align calculates the total number of identical amino acids at the same locations in two sequences with equal length. Since the products of decision values of totally correct boundary pairs are close to 1 but couldn't reach it, while the products of alignment scores have a maximum value of 1, parameter α is introduced to balance the two kinds of maximum products for fair comparisons ( α = 0.95 here). Since the alignment technique here is rather simple, a revision could be done to enhance the performance of combination with homology based method. The widely used bioinformatics tool for sequence searching: Basic Local Alignment Search Tool (BLAST) is used to replace the original simple alignment. Take the unannotated gag sequence S as the query sequence, and take all the positive NC sequences in the training set as the subject sequences. Then the NC sequence that could produce the most significant alignment results indicates the area most likely to be an NC in S . The locations of the putative initiation site and termination site after combination with BLAST (blastp here since the sequences are protein sequences) are shown as follow: subject to Here min B E is the minimum E-value produced by blastp. The subsequence between the pth and the qth amino acid is the corresponding area that produces the minimum E-value. β is the threshold value that determines the selection of prediction results. Performance assessment. fivefold cross-validation was employed to assess the performance of the WSVM and RF models predicting the initiation sites and termination sites in this paper. G − mean under fivefold cross-validation was selected as the major performance evaluation measure. It also provide the basis for parameter selection of models. S n , S p , ACC and MCC were also calculated as a supplemental reference. 
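These measures follow their standard definitions, written out here for reference:

$$S_n = \frac{TP}{TP + FN}, \qquad S_p = \frac{TN}{TN + FP}, \qquad G\text{-}mean = \sqrt{S_n \cdot S_p},$$

$$ACC = \frac{TP + TN}{TP + TN + FP + FN}, \qquad MCC = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}},$$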
where true positive (TP) and false negative (FN) are the numbers of positive samples that are predicted to be positive and negative, respectively. Analogously, true negative (TN) and false positive (FP) denote the numbers of negative samples that are predicted to be negative and positive, respectively. Among these evaluation measures, G-mean and MCC are better at providing a comprehensive view of the prediction performance, especially with our training dataset, which has a quantity imbalance between positive and negative data. As with the performance assessment on prediction of entire NC proteins, leave-one-out cross-validation is applied. In each turn, we pick out one gag sequence with NC annotation as the testing sequence and leave all the others as the source of training samples. The process is repeated until every sequence has been left out once as the testing sequence. The reason for not applying fivefold cross-validation here is to rule out random factors as much as possible, since fivefold cross-validation could generate different partitions of the dataset, which may cause fluctuations in prediction performance. Such fluctuations could undermine the cogency of performance comparison between different methods. The prediction accuracy of the initiation sites, termination sites and entire NCs was calculated and compared. Detecting putative NCs in gags. When the NC predicting models are eventually built, the models could be used to search for more putative NCs in unannotated gags. A fixed-length sliding window is used to "scan" the unannotated gag sequences to find candidate NC initiation and termination sites. L_is and L_ts were set to equate with the length of the sliding window for convenience. The alignment scores used in this step are those of Eq. (8): A_I(S, p, L_is) = maxAlign(S(p, p+1, ..., p+L_is−1), I_p)/L_is and A_T(S, q, L_ts) = maxAlign(S(q−L_ts+1, q−L_ts+2, ..., q), T_p)/L_ts. The "WSVM & RF + SA" approach is adopted in the detecting process for speed and convenience. When P_M is larger than the threshold β, its corresponding candidate NC boundary pair is predicted as a putative NC. Results Predicting Performance of the method. Prediction models based on the strategy described above were built. Their effectiveness was also tested and is shown below. (Prediction source code is available at SourceForge, with the download URL: https://sourceforge.net/projects/ncprediction/files/NCprediction.zip/download). Accuracy of the prediction of NC initiation and termination sites. The performance of the prediction models aimed at recognizing NC initiation and termination sites based on WSVM and RF was tested by fivefold cross-validation and is shown in Table 1. From Table 1, we can find that the G-mean values of initiation sites and termination sites are above 0.9900 and 0.9548, respectively. The MCC values of initiation sites and termination sites are above 0.9735 and 0.9179, respectively. This indicates that both WSVM and RF models could generate satisfying results. Accuracy of the prediction of NC. All of the 77 retrovirus sequences collected with intact NC structures were used to test the predicting performance of different methods. Leave-one-out cross-validation is applied here to rule out random factors. We tested the performance of WSVM & RF, SA, blastp and their different combinations (please refer to S3 File for more details). The number and rate of accurate predictions of the initiation site, termination site and entire NC are shown in Table 2.
We can find that the combination of machine learning methods and homology based methods could bring about better performance (prediction results of blastp is available at SourceForge, with the download URL: https:// sourc eforge. net/ proje cts/ leave oneou tblas tp/ files/ Blast leave oneout. zip/ downl oad). It is also worth mentioning that the "WSVM & RF + SA" method performs better when there is P M ≥ β ( β = 0.82 here), which indicates that such method is reliable in detecting NCs in unannotated gags. A selfconsistency test was also performed with the "WSVM & RF + SA" method, 90.91% of the NCs could be predicted totally correct, the others are predicted with only slight deviations (please refer to S4 File for more details). Putative NCs. All of the 6041 available unannotated gag sequences were scanned with the "WSVM & RF + SA" model and 235 putative NCs in them were found (please refer to S5 File for more details, the putative NCs are marked in red). Discussion Conservative property of NC boundaries. Motifs of sequences adjacent to origins and terminals of NCs in ERVs were generated based on WebLogo version 2.8.2 (http:// weblo go. berke ley. edu/ logo. cgi) and shown in Fig. 1. From Fig. 1, we can find that sequences adjacent to NC boundaries are quite conservative. This explains why satisfying predicting results could be generated from models built on starts and ends of NC. www.nature.com/scientificreports/ Feature importances. The random forest classifier with its associated gini feature importance, allows for an explicit feature elimination 21 . Thus random forest classifier is utilized to calculate the feature importances of the FASGAI amino acid information. The feature importances of the 6 factors of FASGAI is shown in Fig. 2. Deep learning algorithm. Along with the rapid development of deep learning these years, it is natural to try to use deep learning algorithms such as convolutional neural network (CNN) to solve the prediction of NCs. A CNN model was also built to solve the problem. The optimized model structure is shown in Fig. 3. The model contains only 6 convolutional layers, thus could be considered as a relatively simple CNN. However, the performance of the CNN model is not comparable with the"WSVM & RF + SA" approach, even though its training is much more time consuming. The detailed results were shown in Table 3, from which we could find that Sn rises along with the increase of fold number, while still not comparable with the Sn generated by WSVM or RF (shown in Table 1). The reason of this phenomenon is that deep learning algorithms contains more parameters to be iterated during the training process, but in this case, the sample size is not enough for the sufficient training of the parameters, so the"WSVM & RF + SA" approach suits better. Optimization of model parameters. Model parameters should be optimized until the model could bring out the best predicting performance. As with the WSVM & RF models, we adopted grid search to traverse the parameter space. The parameters that could bring out the highest value of G-mean were considered as the optimized combination of parameters. Since rerunning the model with one set of parameter combination for several times would compensate random factors with each other, another loop was added to the program to rule out arbitrary and capricious behaviours. 
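A compact sketch of that kind of search is shown below, using a class-weighted SVM as a stand-in for the WSVM and a geometric-mean scorer; the parameter grid and repeat count are placeholders rather than the values used in the paper, and the repeated loop over random cross-validation partitions mirrors the extra loop described above for damping run-to-run fluctuations.

```python
# Illustrative G-mean-driven grid search for a class-weighted SVM, repeated over several
# random CV partitions. The grid shown here is a placeholder, not the paper's grid.
import numpy as np
from sklearn.metrics import make_scorer, recall_score
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

def g_mean(y_true, y_pred):
    sens = recall_score(y_true, y_pred, pos_label=1)   # sensitivity
    spec = recall_score(y_true, y_pred, pos_label=0)   # specificity
    return np.sqrt(sens * spec)

def search(x, y, n_repeats=5):
    grid = {"C": [1, 10, 100], "gamma": ["scale", 0.01, 0.001]}
    results = []
    for seed in range(n_repeats):                      # rerun with different CV splits
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
        gs = GridSearchCV(SVC(class_weight="balanced"), grid,
                          scoring=make_scorer(g_mean), cv=cv).fit(x, y)
        results.append((gs.best_params_, gs.best_score_))
    return results
```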
As with the size of the sliding window in the putative NC detection process, the predicting performance of the "WSVM & RF + SA" method with different window lengths is tested and briefly shown in Table 4 (please refer to S6 File for more details). 16 was found to be an optimized value. www.nature.com/scientificreports/ Evolutionary relationship analyses. Evolutionary analyses were conducted in MEGA7 22 . The evolutionary history was inferred using the Neighbor-Joining method 23 . The evolutionary distances were computed using the Poisson correction method 24 and are in the units of the number of amino acid substitutions per site. A comparison result between homology of NCs within genera and that of inter-genera is given in Fig. 4. It is obvious that NCs in the same genus are more homologous than that from different genera. This is identical with www.nature.com/scientificreports/ expectation and implies that genus-specified NC prediction methods could be brought up in the future to further enhance predicting performance when more annotated NCs are accumulated. Future outlook. The co-evolving information in the protein sequences is also verified to be useful for capturing the characteristics of proteins sequences [25][26][27] . Although these attempts were generally made in the area of protein-protein interactions (PPIs) instead of the prediction of functional elements, their basic idea to utilize coevolving information do provide some enlightenment in the process of feature engineering, which might benefit us in our future research. Moreover, when more annotated NC sequences accumulate, the performance of deep learning algorithms could be improved since there would be enough information for the parameter iteration. Data availability All data generated or analysed during this study are included in this published article (and its Supplementary Information files).
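As an illustration of the distance measure used in the tree construction, the sketch below computes Poisson-corrected distances (d = −ln(1 − p), with p the fraction of differing aligned sites) between toy placeholder sequences; the study built the actual tree with the Neighbor-Joining method in MEGA7, and the sequences shown are not real NCs.

```python
# Illustrative Poisson-corrected distances between aligned protein sequences.
# The sequences below are placeholders, not real NC sequences.
import itertools
import math

def p_distance(a: str, b: str) -> float:
    diffs = sum(x != y for x, y in zip(a, b))
    return diffs / len(a)

def poisson_corrected(a: str, b: str) -> float:
    return -math.log(1.0 - p_distance(a, b))   # amino acid substitutions per site

aligned = {"NC_1": "MQRGNFRNQRKTVKC", "NC_2": "MQKGNFRNQRKIVKC", "NC_3": "MHRGNWRNQRKTVRC"}
for (n1, s1), (n2, s2) in itertools.combinations(aligned.items(), 2):
    print(n1, n2, round(poisson_corrected(s1, s2), 3))
```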
2022-01-12T14:36:50.520Z
2022-01-11T00:00:00.000
{ "year": 2022, "sha1": "a1a70c3e8aeb6c0522fbce0dbe172b415d3bf15d", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-021-03182-2.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "a1a70c3e8aeb6c0522fbce0dbe172b415d3bf15d", "s2fieldsofstudy": [ "Biology", "Engineering" ], "extfieldsofstudy": [] }
222239995
pes2o/s2orc
v3-fos-license
Ensemble-based statistical interpolation with Gaussian anamorphosis for the spatial analysis of precipitation Hourly precipitation over a region is often simultaneously simulated by numerical models and observed by multiple data sources. An accurate precipitation representation based on all available information is a valuable result for numerous applications and a critical aspect of climate. Inverse problem theory offers an ideal framework for the combination of observations with a numerical model background. In particular, we have considered a modified ensemble optimal interpolation scheme, that takes into account deficiencies of the background. An additional source of uncertainty for the ensemble background has 5 been included. A data transformation based on Gaussian anamorphosis has been used to optimally exploit the potential of the spatial analysis, given that precipitation is approximated with a gamma distribution and the spatial analysis requires normally distributed variables. For each point, the spatial analysis returns the shape and rate parameters of its gamma distribution. The Ensemble-based Statistical Interpolation scheme with Gaussian AnamorPhosis (EnSI-GAP) is implemented in a way that the covariance matrices are locally stationary and the background error covariance matrix undergoes a localization process. Con10 cepts and methods that are usually found in data assimilation are here applied to spatial analysis, where they have been adapted in an original way to represent precipitation at finer spatial scales than those resolved by the background, at least where the observational network is dense enough. The EnSI-GAP setup requires the specification of a restricted number of parameters and specifically the explicit values of the error variances are not needed, since they are inferred from the available data. The examples of applications presented provide a better understanding of the characteristics of EnSI-GAP. The data sources considered 15 are those typically used at national meteorological services, such as local area models, weather radars and in-situ observations. For this last data source, measurements from both traditional and opportunistic sensors have been considered. Copyright statement. Usage rights are regulated through the Creative Commons Attribution 3.0 License (https://creativecommons.org/ licenses/by/3.0). covariance matrix is derived from numerical model ensemble and where Gaussian anamorphosis is applied directly to precipitation data. An additional innovative part of the method is that EnSI-GAP does not require the explicit specification of error 95 variances for the background or observations, as most of the other methods (Soci et al., 2016). In fact, those error variances are often difficult to estimate in a way that is general enough to cover a wide range of cases. Our approach is to specify the reliability of the background with respect to observations, in such a way that error variances can vary both in time and space. An additional innovative part of our research is that we consider opportunistic sensing networks of the type described by de Vos et al. (2020) within the examples of applications proposed. Thanks to those networks, for some regions we can rely on an 100 extremely dense spatial distribution of in-situ observations. The remaining of the paper is organized as follows. Sec. 2 describes the EnSI-GAP method in a general way, without references to specific data sources. Sec. 
3 presents the results of EnSI-GAP applied to three different problems: an idealized situation, then two examples where the method is applied to real data, such as those mentioned above. The results are discussed in Sec. 3.3.
2 Methods: EnSI-GAP, Ensemble-based statistical interpolation with Gaussian anamorphosis for precipitation
We assume that the marginal probability density function (PDF) for the hourly precipitation at a point in time follows a gamma distribution (Wilks, 2019). This marginal PDF is characterized through the estimation of the gamma shape and rate for each point and hour. Precipitation fields are regarded as realizations of locally stationary, trans-Gaussian random fields, where each hour is considered independently from the others. Trans-Gaussian random fields are used for the production of precipitation observational gridded datasets by Frei and Isotta (2019). A random field is said to be stationary if the covariance between a pair of points depends only on how far apart they are located from each other. Precipitation totals are nonstationary random fields because the covariance between a pair of points in space depends not only on the distance between them but also varies with direction. In our method, precipitation is locally modeled as a stationary random field. The covariance parameter estimation and spatial analysis are carried out in a moving-window fashion around each grid point. A similar approach is described by Kuusela and Stein (2018), and the elaboration over the grid can be carried out in parallel for several grid points simultaneously. A particular implementation of EnSI-GAP is reported in Algorithm 1. The mathematical notation and the symbols used are described in two tables: Tab. 1 for global variables and Tab. 2 for local variables, which are those variables that vary from point to point. As in the paper by Sakov and Bertino (2011), upper accents have been used to denote local variables. If X is a matrix, X i is its ith column (column vector) and X i,: is its ith row (row vector). The Bayesian statistical method used in our spatial analysis is optimal for Gaussian random fields. Therefore, a data transformation is applied as a pre-processing step before the spatial analysis. The introduction of a data transformation compels us to inverse transform the predictions of the spatial analysis back into the original space of precipitation values. Algorithm 1 can be divided into three parts, which are described in the next sections: the data transformation in Sec. 2.1, the Bayesian spatial analysis in Sec. 2.2 and the inverse transformation in Sec. 2.3.
Data transformation via Gaussian anamorphosis
The data transformation chosen is a Gaussian anamorphosis (Bertino et al., 2003), which transforms a random variable following a gamma distribution into a standard Gaussian. This pre-processing strategy has been used in several studies in the past, e.g. Amezcua and Leeuwen (2014); Lien et al. (2013). A visual representation of the transformation process can be found in Fig. 1 of the paper by Lien et al. (2013). The hourly precipitation background and observations, X̃ f and ỹ o respectively, are transformed into those used in the spatial analysis by means of the Gaussian anamorphosis g(), applied elementwise: X f = g(X̃ f) (1) and y o = g(ỹ o) (2). The gamma shape and rate are derived from the precipitation values by a fitting procedure based on maximum likelihood.
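As an aside, a minimal Python sketch of this transformation is given below. It uses SciPy's maximum-likelihood gamma fit purely for illustration (the fitting procedure actually used in EnSI-GAP is the Newton-Raphson iteration described next), and all variable names are illustrative.

import numpy as np
from scipy import stats

def fit_gamma(values):
    # Maximum-likelihood fit of a gamma distribution with the location fixed at zero.
    shape, _, scale = stats.gamma.fit(values, floc=0.0)
    return shape, 1.0 / scale  # shape and rate

def anamorphosis(values, shape, rate, eps=1e-4):
    # Gaussian anamorphosis g(): gamma CDF followed by the standard-normal quantile.
    # eps is the small offset added to zero precipitation values (see below).
    p = stats.gamma.cdf(values + eps, a=shape, scale=1.0 / rate)
    return stats.norm.ppf(p)

def inverse_anamorphosis(z, shape, rate, eps=1e-4):
    # Inverse transformation g^-1(): standard-normal CDF followed by the gamma quantile.
    return stats.gamma.ppf(stats.norm.cdf(z), a=shape, scale=1.0 / rate) - eps

# example with a few synthetic hourly precipitation values (mm)
precip = np.array([0.0, 0.2, 1.5, 4.0, 12.3])
shape, rate = fit_gamma(precip + 1e-4)
z = anamorphosis(precip, shape, rate)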
The maximum likelihood estimators are calculated iteratively by means of a Newton-Raphson method as described by Wilks (2019), Sec. 4.6.2. In particular, the gamma distribution parameters are fitted to each ensemble member field of precipitation separately. Then, the averaged shape and rate are used in g() for Eqs. (1)-(2). In Gaussian anamorphosis, zero precipitation values must be treated as special cases, as explained by Lien et al. (2013). The solution we adopted is to add a very small amount to zero precipitation values, e.g. 0.0001 mm, and then to apply the transformation g() to all values. The same small amount is then subtracted after the inverse transformation. This is a simple but effective solution for spatial analysis, as shown in the example of Sec. 3.1.1. In principle, the statistical interpolation is sensitive to the small amount chosen, such that using 0.01 mm instead of 0.0001 mm will return slightly different analysis values in the transition between precipitation and no-precipitation. In practice, we have tested it and found negligible differences when values smaller than e.g. 0.05 mm (half of the precision of a standard rain gauge measurement) are used.
Spatial analysis
The spatial analysis inside Algorithm 1 has been divided into three parts. In Sec. 2.2.1, global variables are defined. Then, as stated in the introduction of Sec. 2, the analysis procedure is performed on a gridpoint-by-gridpoint basis. In Sections 2.2.2-2.2.3, the procedure applied at the generic ith gridpoint is described. In Sec. 2.2.2, the specification of the local error covariance matrices is described. In Sec. 2.2.3, the standard analysis procedure is presented together with the treatment of a special case.
Definitions
In Bayesian statistics, according to Savage (1972), a state is "a description of the world, which is the object about which we are concerned, leaving no relevant aspect undescribed" and "the true state is the state that does in fact obtain". The object of our study is the hourly precipitation field x(), that is, the hourly total precipitation amount over a continuous surface covering a spatial domain in terrain-following coordinates r. Our state is the discretization of this continuous field over a regular grid. The true state (our "truth", x t) at the ith grid point is the areal average x t i = (1/|V i|) ∫ V i x(r) dr (3), where V i is a region surrounding the ith grid point. The size of V i determines the effective resolution of x t at the ith grid point. Our aim is to represent the truth with the smallest possible V i. The effective resolution of the truth will inevitably vary across the domain. In observation-void regions, the effective resolution will be the same as that of the numerical model used as the background, hence approximately of the order of 10-100 km 2 for high-resolution local area models (Müller et al., 2017). In observation-dense regions, the effective resolution should be comparable to the average distance between observation locations, with the model resolution as the upper bound. The analysis is the best estimate of the truth, in the sense that it is the linear, unbiased estimator with the minimum error variance. The analysis is defined as x a = x t + η a, where the column vector of the analysis error at grid points is a random variable following a multivariate normal distribution η a ∼ N (0, P a).
The marginal distribution of the analysis at the ith grid point is a normal random variable and our statistical interpolation scheme returns its mean value x a i and its standard deviation σ a i = √(P a ii). As in linear filtering theory (Jazwinski, 2007), the analysis is obtained as a linear combination of the background (a priori information) and the observations. The background is written as x b = x t + η b, where the background error is a random variable following a multivariate normal distribution η b ∼ N (0, P b). The background PDF is determined mostly, but not exclusively, by the forecast ensemble, as described in Sec. 2.2.1. The forecast ensemble mean is x f = k −1 X f 1, where 1 is the k-vector with all elements equal to 1. The background expected value is set to the forecast ensemble mean, while the forecast ensemble spread, quantified by the forecast uncertainty P f of Eq. (4), plays a role in the determination of P b, as defined in Sec. 2.2.2. The p observations are written as y o = Hx t + ε o, where the observation error is ε o ∼ N (0, R) and H is the observation operator, which we consider a linear function mapping R m onto R p.
Specification of the observation and background error covariance matrices
Our definitions of the error covariance matrices follow from a few working assumptions; WAn indicates the nth working assumption and these abbreviations will be used in the text. WA1, background and observation uncertainties are weather- and location-dependent. WA2, the background is more uncertain where either the forecast is more uncertain or observations and forecasts disagree the most. WA3, observations are a more accurate estimate of the true state than the background. We want to specify how much more we trust the observations than the background in a simple way, such as e.g. "we trust the observations twice as much as the background". WA4, the local observation density must be used optimally to ensure a higher effective resolution, as defined in Sec. 2.2.1, where more observations are available. WA5, the spatial analysis at a particular hour does not require the explicit knowledge of observations and forecasts at any other hour. However, constants in the covariance matrices can be set depending on the history of deviations between observations and forecasts. WA5 makes the procedure more robust and easier to implement in real-time operational applications. A distinctive feature of our spatial analysis method is that the background error covariance matrix i P b is specified as the sum of two parts: a dynamical component and a static component. The dynamical part introduces nonstationarity, while the static part describes covariance-stationary random variables. This choice follows from WA1 and has been inspired by hybrid data assimilation methods (Carrassi et al., 2018). The dynamical component of the background error covariance matrix is obtained from the forecast ensemble. Because the ensemble has a limited size, and often the number of members is quite small (of the order of tens of members), a straightforward calculation of the background covariance matrix will include spurious correlations between distant points. Localization is a technique applied in DA to fix this issue (Greybush et al., 2011). The static component has also been introduced to remedy the shortcomings of using numerical weather prediction as the background. There are deviations between observations and forecasts that cannot be explained by the forecast ensemble. A typical example is when all the ensemble members predict no precipitation but rainfall is observed.
In those cases, we trust observations, as stated through WA3. Then, the static component adds noise to the model-derived background error, as in the paper by Raanes et al. (2015). In Bocquet et al. (2015), the static component is referred to as a scale matrix, since it is used to scale the noise component of the model error, and we adopt the same term here. We will also refer to this matrix, and its related quantities, with the letter u to emphasize that this component of the background error is "unexplained" by the forecast. i P b is written as i P b = i Γ • P f + i σ 2 u i Γ u (5). The first component on the right-hand side of Eq. (5) is the dynamical part: P f is the forecast uncertainty of Eq. (4), i Γ is the localization matrix and • is the Schur product symbol. The localization technique we apply is a combination of local analysis and covariance localization, as they have been defined by Sakov and Bertino (2011). In the local analysis, only the observations closest to the ith grid point enter the analysis update, while covariance localization damps the ensemble-derived covariances with distance; suitable localization functions have been described, for instance, by Gaspari and Cohn (1999). We have chosen not to inflate or deflate P f directly and to modulate the amplitude of background covariances only through the terms of Eq. (5); in this way, we reduce the number of parameters that need to be specified. As a matter of fact, for the combination of observations and background in the analysis procedure, the m by m covariance matrices are never directly used. Instead, the matrices used are the covariances between each grid point and the observation locations, and the covariances between pairs of observation locations. The background error variance at the observation locations is required to be at least i σ 2 ob /(1 + ε 2); when the forecast ensemble spread is too small, Eq. (11) ensures that Eq. (10) is valid by setting i σ 2 u accordingly.
Analysis procedure
The expressions for the analysis (Eq. (13)) and its error variance (Eq. (14)) are direct results of linear filter theory (Jazwinski, 2007): the analysis is the background updated with a weighted innovation, and the analysis error variance is derived from the error covariance matrices. Eqs. (13)-(14) are also typical of Optimal Interpolation and the formulation used is similar to the one adopted by Uboldi et al. (2008), which follows from Ide et al. (1997). Because an ensemble of fields is used to determine the background error, the method is similar to Ensemble OI (Evensen, 1994). The special case of i σ 2 f = 0 and i σ 2 ob /(1 + ε 2) = 0 still needs to be considered, since it does not belong to the cases of Eqs. (11)-(12). This is the case when both observations and forecast ensemble means at observation locations have exactly the same values and, in addition, all the ensemble members have the same values too. In practice, this might happen only when no precipitation is observed and forecasted at the same time. Since the innovation is a vector of zeros, from Eq. (13) it follows that the analysis coincides with the background. Because all the information available shows an exceptional level of agreement, we have chosen to set the analysis error variance to zero, such that for those points the analysis PDFs are Dirac delta functions.
Data inverse transformation
The direct inverse transformation at the ith grid point is written as x̃ a i = g −1 (x a i) (15); however, we need to back-transform a Gaussian PDF and not a scalar value. Eq. (15) returns an estimate of the median of the gamma distribution associated with the ith grid point. Our goal is to obtain the gamma shape and rate. To achieve that, the direct inverse transformation g −1 is applied to 400 quantiles of the univariate Gaussian PDF defined by x a i and (σ 2) a i; a similar approach is used by Erdin et al. (2012). Then, a least-mean-square optimization procedure is used to obtain the optimal shape and rate that best fit the back-transformed quantiles.
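A minimal Python sketch of this back-transformation step is given below; it maps quantiles of the transformed-space Gaussian through the inverse anamorphosis and refits a gamma distribution by least squares. The function and variable names are illustrative, and the prior gamma parameters (shape0, rate0) are assumed to be those used in g().

import numpy as np
from scipy import stats, optimize

def back_transform_pdf(mean_z, var_z, shape0, rate0, n_quantiles=400, eps=1e-4):
    # Quantiles of the Gaussian analysis PDF in transformed space.
    probs = (np.arange(n_quantiles) + 0.5) / n_quantiles
    z_q = stats.norm.ppf(probs, loc=mean_z, scale=np.sqrt(var_z))
    # Inverse anamorphosis g^-1 with the gamma parameters used in the forward transform.
    x_q = stats.gamma.ppf(stats.norm.cdf(z_q), a=shape0, scale=1.0 / rate0) - eps
    x_q = np.clip(x_q, 0.0, None)

    def residuals(params):
        shape, rate = np.exp(params)  # log-parameterization enforces positivity
        fitted = stats.gamma.ppf(probs, a=shape, scale=1.0 / rate)
        return fitted - x_q

    sol = optimize.least_squares(residuals, x0=np.log([shape0, rate0]))
    shape, rate = np.exp(sol.x)
    return shape, rate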
Given the optimal shape and rate, it is possible to obtain the statistics that best represent the distribution for a specific application. In Sec. 3, the analysis value chosen is the mean, as it is the value that minimizes the expected squared estimation error. However, other choices may be more convenient depending on the application, as discussed by Fletcher and Zupanski (2006), where, for instance, the mode was chosen as the best estimate. In Sec. 3, we will also consider selected quantiles of the gamma distribution to represent analysis uncertainty.
Results
EnSI-GAP is applied to two case studies in Sec. 3.1, one on synthetic and one on real-world data. In addition, its performance is evaluated over several months in Sec. 3.2. Then, a discussion of the results is presented in Sec. 3.3. In Sec. 3.1.1, EnSI-GAP is applied over a one-dimensional grid and in a "controlled environment", that is, over synthetic data specifically generated for testing EnSI-GAP on precipitation. The spatial analysis is performed with and without data transformation to assess its effects. Furthermore, we have compared different ways of specifying the scale matrix and we have investigated the sensitivity of EnSI-GAP to variations of α and ε 2. In Sec. 3.1.2, a second, more realistic example of application for EnSI-GAP is reported, where the spatial analysis is performed for a case study of intense precipitation, which happened in July 2019 over South Norway. The data are those used in the operational daily routine at MET Norway. The forecasts are from the MetCoOp Ensemble Prediction System (MEPS, Frogner et al., 2019). MEPS has been running operationally four times a day (00 UTC, 06 UTC, 12 UTC, 18 UTC) since November 2016 and its ensemble consists of 10 members. The hourly precipitation fields are available over a regular grid of 2.5 km. The observational dataset of hourly precipitation is composed of two data sources: precipitation estimates derived from the composite of MET Norway's weather radars, and meteorological weather stations equipped with ombrometers, such as rain gauges or other devices. The hourly precipitation in-situ observations have been retrieved from MET Norway's climate database frost.met.no (last checked on 2020-05-13). In addition to MET Norway's official weather stations, the database includes data collected by several Norwegian public institutions, such as universities (e.g., the Norwegian Institute of Bioeconomy Research, Nibio), the Norwegian Water Resources and Energy Directorate (NVE) and the Norwegian Public Roads Administration (Statens vegvesen). As described in the recent paper by Nipen et al. (2020), MET Norway is successfully integrating amateur weather station temperature data into its operational routine. In this study, hourly precipitation observations from such opportunistic sensing networks have also been considered. The domain, data sources and grid settings of the cross-validation experiments are the same as for the case study of intense precipitation of Sec. 3.1.2. We aim to evaluate EnSI-GAP from both a deterministic and a probabilistic viewpoint. The verification scores considered are commonly used in forecast verification and described in several books, such as, for example, Jolliffe and Stephenson (2012). A further useful reference for the scores is the website of the World Meteorological Organization, https://www.wmo.int/pages/prog/arep/wwrp/new/jwgfvr.html (last checked 2020-05-13).
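For reference, a minimal sketch of two of the scores used in the following (the Equitable Threat Score, ETS, for deterministic predictions, and the Brier Skill Score, BSS, for probabilistic ones) is given here in Python; the arrays and the reference forecast are placeholders, not the datasets of this study.

import numpy as np

def ets(obs, pred, threshold):
    # Equitable Threat Score for the yes/no event "value > threshold".
    o = obs > threshold
    p = pred > threshold
    hits = np.sum(p & o)
    false_alarms = np.sum(p & ~o)
    misses = np.sum(~p & o)
    hits_random = (hits + false_alarms) * (hits + misses) / o.size
    denom = hits + false_alarms + misses - hits_random
    return (hits - hits_random) / denom if denom != 0 else np.nan

def brier_skill_score(obs_event, prob_forecast, prob_reference):
    # BSS of a probabilistic forecast relative to a reference forecast
    # (e.g. the probabilities derived from the raw background ensemble).
    bs = np.mean((prob_forecast - obs_event) ** 2)
    bs_ref = np.mean((prob_reference - obs_event) ** 2)
    return 1.0 - bs / bs_ref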
One-dimensional simulations EnSI-GAP is shown over a one-dimensional grid with 400 points and spacing of 1 spatial unit, or 1 u, such that the domain covers the region from 0.5 u to 400.5 u and the generic ith grid points is placed at the coordinate i u. A simulation begins 330 with the creation of a true state, then observations and ensemble background are derived from it. The statistical interpolation scheme is applied with different configurations in order to illustrate the behaviour of the method. The simulation presented here is shown in Fig. 1a. For each grid point, the true value (black line) is generated by a random extraction from the gamma distribution with shape and rate set to 0.2 and 0.1, respectively. To ensure spatial continuity of the truth, an anamorphism is used to link a 400-dimensional multivariate normal (MVN) vector with the gamma distribution. The 335 samples from the MVN distribution, with a prescribed continuous spatial structure, are obtained as described by Wilks (2019), chapter 12.4. The MVN mean is a vector with 400 components all set to zero and the covariance matrix is determined using a Gaussian covariance function with 10 u as the reference length used for scaling distances. The effective resolution (Sec. 2.2.1) of the truth is then 10 u. The ensemble background (gray lines in Fig. 1a) on the grid is obtained by perturbing the truth. The background values at 340 observation locations are obtained using nearest neighbour interpolations applied to the ensemble members. The observation operator H (Sec. 2.2.1) is the nearest neighbour interpolation, that is a matrix with all zeros except for a single element on each line equal to 1 at the element corresponding to the closest grid point. The truth is perturbed considering four main error sources, which are typically found in precipitation fields simulated by numerical models. The first typical error is the misplacement of precipitation events, that is implemented here independently for each ensemble member by shifting the true values along the 345 grid by a random number between -10 u and +10 u. Second, the effective resolutions of the background members are set to be coarser than the truth. For each member, the coarser resolution is obtained by multiplying the true values by a coefficient derived from a uniform distribution with values between 0.05 and 2 and a spatial structure function given by a MVN with Gaussian covariance function with a reference length greater than that of the truth, which is 10 u. The exact reference length scale varies between each member as it is extracted from a Gaussian distribution with mean of 50 u and a standard deviation 350 of 5 u. Third, as previously stated in Sec. 2, the challenging special case of the background showing no-precipitation while the true state reports precipitation is considered. For this purpose, all the ensemble members for the grid points between 200 u and 300 u are set to 0 mm. The fourth typical error is a variation of the third one, the background at grid points between 50 u and 150 u follows an alternative truth but different from 0 mm. It is possible to recognize the four regions described above on the grid in Fig. 1 by means of the coordinate on the x-axis. Because we had to ensure continuity of the background, we have 355 enforced smooth transitions between the regions mentioned above and their surroundings. The number of observations (blue dots in Fig. 1a) is set to 40. 
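A sketch of the truth construction just described (a spatially correlated Gaussian field mapped to gamma-distributed precipitation values through an anamorphosis) could look as follows in Python; the grid size, correlation length, shape and rate follow the values quoted above, while the function names are illustrative.

import numpy as np
from scipy import stats

def gaussian_covariance(n_points, length_scale):
    # Gaussian covariance matrix on a regular one-dimensional grid with unit spacing.
    coords = np.arange(1, n_points + 1, dtype=float)
    d = np.abs(coords[:, None] - coords[None, :])
    return np.exp(-0.5 * (d / length_scale) ** 2)

def synthetic_truth(n_points=400, length_scale=10.0, shape=0.2, rate=0.1, seed=0):
    # Sample a multivariate normal field and map it to gamma values (anamorphosis).
    rng = np.random.default_rng(seed)
    cov = gaussian_covariance(n_points, length_scale) + 1e-8 * np.eye(n_points)
    z = rng.multivariate_normal(np.zeros(n_points), cov)
    p = stats.norm.cdf(z)  # probabilities in (0, 1)
    return stats.gamma.ppf(p, a=shape, scale=1.0 / rate)

truth = synthetic_truth()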
The observation locations are randomly chosen, such that the observation density is variable across the grid. The observed value at a location is obtained as the true value of the nearest grid point, plus a random noise determined as a random number between -0.02 and 0.02 that multiplies the true value. The procedure is consistent with the fact that observation errors for precipitation should follow a multiplicative model (Tian et al., 2013). The observation distribution is denser in the central part of the domain and sparser closer to the borders. The constraints on the number of observations are: 5 between 1 u and 100 u; 30 between 101 u and 300 u; 5 between 301 u and 400 u. Panel d of Fig. 1 shows the Integral Data Influence (IDI, Uboldi et al., 2008), a parameter that stays close to 1 in observation-dense regions, while it is exactly equal to 0 in observation-void regions. In practice, the IDI at the ith grid point is computed here as the analysis in Eq. (13) when all the observations are set to 1 and the background to 0. Where the IDI is close to zero, the analysis is as good as the background. The simulation has been configured such that the domain is well covered by the observations. In this way, it can provide insights into the combination process itself. As follows from Eq. (13) and Eq. (5), the IDI depends on the error covariance matrices. We have used the equations of Algorithm 1; the only additional parameter we have to set is D i. In this example, D i is estimated as the distance between the ith grid point and its 3rd closest observation location. This way, the IDI stays close to 1, meaning that the observations do have an impact on the analysis, even in observation-sparse regions. D i is shown in Fig. 1c and it has been constrained to vary between 5 u and 20 u. More details on how to set D i are discussed in Sec. 3.3. The effect of the Gaussian anamorphosis is shown in Fig. 1b. The transformed precipitation varies within a smaller range than the original precipitation, thus effectively shortening the tail of the distribution, reducing its skewness and making it more similar to a Gaussian distribution. The transformed precipitation analysis is shown in Fig. 2 for different configurations of the statistical interpolation scheme. The localization matrix i Γ of Eq. (5) for all panels is specified using Gaussian functions, of the form of those used in Algorithm 1 for i Z and i V, with L i constant and set to 25 u for all the grid points. The value of ε 2 is set to 0.1, which means that we trust the observations much more than the background. The parameters that are allowed to vary are those determining the scale matrix. As stated above, D i is determined adaptively at each grid point as shown in Fig. 1c. α is set to two different values: in the left column (panels a and c) α = 0.1, while in the right column (panels b and d) α = 1. The background uncertainty is a trade-off between the ensemble spread and the averaged innovation; with α = 1, the weight of each component is determined by the simulation presented in Fig. 1, without considering that it is just one possible realization of the ensemble that should be used to derive robust parameter estimates. In the case of α = 0.1, the scale matrix is multiplied by a smaller value of i σ 2 u, as can be seen from Eq. (11).
The panels in the top row (panels a and b) are obtained using a Gaussian function for i Γ u, as in Algorithm 1, while the bottom row (panels c and d) shows the analyses when an exponential function is used. The exponential correlation function for the spatial analysis of precipitation is used, for example, by Mahfouf et al. (2007); Lespinas et al. (2015); Soci et al. (2016). In Fig. 2, the analysis mean (or expected value, red line) fits the observations and the observed value is often within the analysis PDF envelope shown (pink region), despite the background deficiencies. The same comments hold true also when ε 2 = 1 (not shown here), with an increase of the analysis residuals (analysis minus observation) and a slightly larger spread of the analysis PDF. The analysis means, in all panels, follow the observed values closely, and variations in the scale matrix seem to have limited effects on the analysis means, at least in regions with a dense observational network, such as between 100 u and 300 u. The analysis ensemble spread is more sensitive to variations in the scale matrix. In the case of α = 1, the spread is larger than for α = 0.1 because of the increased background uncertainty. In all panels, the analysis presents the highest uncertainties between 50 u and 100 u, where observations are sparse and the background follows an alternative truth. The analysis uncertainty is large also between 200 u and 300 u, where the background ensemble is set to no-precipitation while the rather dense observational network reports yes-precipitation. Interestingly enough, the region between 200 u and 300 u is also where the analyses derived from Gaussian and exponential functions differ the most. In this region, the ensemble-dependent part of the background error covariance vanishes and the analysis spread is controlled by the scale matrix. With the data transformation, the spread associated with the analysis PDF is proportional to the observed value; moreover, the skewness of the PDF allows values higher than the mean to occur with a higher probability than in the case of no data transformation. Those two characteristics are positive achievements of EnSI-GAP, since they are typical of a gamma PDF, which is regarded as a good statistical model for precipitation uncertainties (Wilks, 2019). The impact of the data transformation seems to be less significant on the analysis means, which, given the EnSI-GAP settings used, are principally constrained to fit the observations. Panels a-d of Fig. 3 illustrate how the analysis uncertainty is modified when passing from the transformed to the original space. The regions with the highest uncertainties in Figs. 2-3 coincide, except for the noticeable difference of the points between 250 u and 300 u. In Fig. 2, the analysis PDFs for all those grid points show Gaussian functions with some spread. However, the same points in Fig. 3 show a much narrower analysis PDF, often without any significant spread, even though we are not in the special case of exact agreement between observations and background described in Sec. 2.2.3. (Fig. 4 caption: for all panels, α = 1, L = 25 u and Gaussian covariance functions are used for localization and the specification of the scale matrix; D i is the same as shown in Fig. 1c.) The effect on the error variances of using a data transformation against not using it is investigated in Fig. 4. EnSI-GAP variances are shown in panels a and b, with ε 2 = 0.1 and ε 2 = 1 respectively. When ε 2 = 1, i σ 2 b is equal to i σ 2 o everywhere, but their values vary along the one-dimensional grid. The scheme without data transformation is shown in panels c and d for the same ε 2 values.
As a consequence of the data transformation, the scales on the y-axis differ between panels a-b and c-d. For all panels, the analysis PDF spread is higher between 50 u and 100 u, as can be seen e.g. in Fig. 3b that correspond to Fig. 4a. When the data transformation is applied, i σ 2 b reaches its maximum between 200 u and 300 u, with a second maximum between 50 u and 105 u. In the case without data transformation, the two maxima of i σ 2 b are still in the 440 same regions, though the principal one is between 50 u and 150 u. Besides, when using a data transformation the two maxima do have rather similar values, while without using data transformations the highest maximum can be four times the smallest one (Fig. 4d). In all panels, i σ 2 u is different from zero in the region between 180 u and 300 u. In addition, for ε 2 = 0.1, i σ 2 u is different from zero also in the region between 50 u and 150 u because the variation of ε 2 = 0.1 modifies the threshold value of i σ 2 ob /(1 + ε 2 ) in Eqs. (11)-(12). Then, one interpretation is that between 200 u and 300 u, when the background fails to predict 445 the occurrence of precipitation, all spatial analysis scheme recognize the importance of using the additional term modulated by i σ 2 u for the specification of the background error. On the other hand, for the type of background failure taking place between 50 u and 150 u, that is the background is following an alternative truth with respect to observations, the behaviour of the spatial analysis scheme is less predictable and it is more sensitive to the EnSI-GAP settings. Furthermore, this example indicates that a data transformation is useful to keep the variances at reasonable values. Intense precipitation case over South Norway A mass of moist air from the ocean moving towards the Norwegian mountains was at the origin of several intense showers over western Norway on the 30th of July 2019. South Norway is the domain under consideration and it is shown in Fig. 5, the domain measures 373 km in the meridional direction and 500 km in the zonal direction. The measurements from the weather stations managed by MET Norway show hourly precipitation values with more than 20 mm, which is extremely intense given 455 the climatology of the region. In addition, thousands of lightning strikes have been recorded (not shown here), thus confirming the convective nature of the precipitation. Intense events have been observed in the afternoon along the coast and over the nearby mountains, especially in Sogn og Fjordane. This region is shown as the black box in Fig. 5, it extends for 80 km in both meridional and zonal directions. Point A corresponds to the center of a grid box where a maximum of precipitation has been observed. Point B is the center of a grid box that is not covered by observations and where a maximum of precipitation has 460 been reconstructed by the analysis. The distance between points A and B is 14 km. In Sogn og Fjordane damages have been reported, they were caused by the heavy rain that also triggered a series of landslides. One of them caused a fatality when a driver was caught in the debris flow. On both domains, the focus is on the representation of hourly precipitation patterns at the mesoscale, as it has been defined by Thunis and Bornstein (1996); Stull (1988), though over different domains we will focus on different parts of the mesoscale. 
South Norway is used to show that the variability of the fields represented by the forecast ensemble members involves mostly the Meso-β part of the mesoscale (i.e. spatial scales from 20 km to 200 km). Sogn og Fjordane is a domain where high-resolution information is needed to support fine-scale analysis by e.g. civil protection authorities. In this case, we will study precipitation patterns at the Meso-γ scale (i.e. from 2 km to 20 km). The EnSI-GAP Algorithm 1 has been used over a grid with 2.5 km spacing, which is the resolution of the MEPS grid (see Sec. 3). The parameters are ε 2 = 0.1, α = 0.1, p mx = 200, L i = 50 km constant; D i is estimated adaptively on the grid as the distance between the grid point and the 10th closest observation location, with lower and upper bounds of 3 km and 10 km, respectively. The EnSI-GAP settings are such that the analyses stay much closer to the observations than to the forecasts, where observations are available. The analysis uncertainty will reflect locally both the forecast ensemble spread and the averaged innovation, but the weight of this last component is damped by α. The two parameters p mx and D i are used to limit the number of observations that can influence the analysis at a grid point. The localization parameter L i is set to a rather large value, such that the dynamics of the forecast ensemble are evident in the results. The observation error covariance matrix of Eq. (6) is defined with a diagonal i Γ o, that is, a situation where radar-derived and in-situ observations are assumed to have the same precision; moreover, we are ignoring the spatial correlation of radar-derived observation errors. An investigation of spatially correlated radar-derived observation errors is outside the scope of this study. Note that those settings are useful for the illustration of the method, while for operational applications other settings may be more appropriate, such as a smaller value of L i or a more sophisticated characterization of the observation errors, for example. Figure 6 shows the hourly precipitation data for 2019-07-30 15:00 UTC over South Norway. The observational data are shown in panel a. For each grid box, the average of radar-derived precipitation and in-situ measurements within that box is shown. Grid points that are not covered by observations are marked in gray. In panel b, the background ensemble mean derived from a 10-member ensemble forecast is shown, while six of the ten ensemble members are shown in Fig. 7. The 10-member ensemble shows realistic precipitation fields; moreover, they are rather similar, at least in terms of the weather situation at the Meso-β scale. Weather forecasters can be quite confident in stating that heavy precipitation is likely to occur over western and southern Norway, while it is less likely over eastern Norway. The forecast uncertainty is large enough that it is difficult to predict exactly which subregion will be affected by the most intense showers. The observations confirm that showers occur along the coast of western Norway and that the most intense precipitation event is located in Sogn og Fjordane, that is, the black box in Fig. 5. Note that approximately half of the box is not covered by observations. Panel c of Fig. 6 shows the analysis, specifically the analysis mean at each grid point. In this case, the spatial analysis acts almost as a "gap filling" procedure to fill in empty spaces in between observations with the most likely precipitation values. As prescribed by our EnSI-GAP settings, the analyses over
As prescribed by our EnSI-GAP settings, the analyses over 495 observation-dense regions are not that different from the observed values. It is interesting to have a look at the results over Sogn og Fjordane and to focus on the Meso-γ scale. The evolution in time of the hourly precipitation fields is shown in Fig. 8 for observations, background and analysis at three different hours, as clearly indicated in the figure. The two crosses mark the locations of points A and B (see Fig. 5). As stated above, point A is in a densely observed part of the domain, while point B is almost in the middle of the observation-void region and the closest 500 observations are located at a distance of approximately 10 km. As a consequence, in the EnSI-GAP algorithm, D i at point A is closer to 3 km, while at point B is closer to 10 km. The observed fields show a large variability over short distances and the difference between two adjacent points can be as large as 30 mm/h. The background is smoother than the observed field and shows scattered showers for 14:00 UTC and 15:00 UTC, then a wider precipitation cell almost centered over point B is shown at 17:00 UTC. At 14:00 UTC, the observed value at point A (from radar-derived estimates) is over 30 mm/h and it is 505 evident a sharp gradient from south-west to north-east. The gradient is so intense that the nearby points south-west of point A, only 3 km apart, shows almost no precipitation. The background indicates that a maximum of the field can occur between point A and B. The analysis matches the observations, though smoothing out their spatial variability, such that at point A the analysis value is less than 10 mm/h, that is representative of the areal average of the nearby points. A precipitation maximum of more than 30 mm/h has been reconstructed in the analysis between points A and B, that is consistent with the gradient in the 510 observations and the pattern in the background. The situation at 15:00 UTC in the observations around point A is a bit different. The radar-estimated precipitation is again over 30 mm/h but there are several grid points in the surroundings of point A with similar values, such that the local gradient of the field is less steep and it shows a decrease of precipitation east of point A. The background also shows that it is more likely to find intense precipitation immediately to the west of point A than to the east. In general, EnSI-GAP forces the analysis to follow more closely the observations than the background and the analysis uncertainty is smaller than those of the background. As a consequence, the timing of the precipitation onset is also better represented in the analysis. At point A, the PDF of the precipitation analysis 530 between 10:00 UTC and 13:00 UTC is a Dirac's delta function and it has the value of 0 mm/h. From 14:00 UTC onward, the analysis PDF is a gamma. During this period, on average the interquartile range (i.e., the difference between the 75th and the 25th percentiles) is 20% of the analysis value, the difference between the 90th and the 10th percentiles is 38% of the analysis and the difference between the 99th and the 1st percentiles is 70% of the analysis. From 14:00 UTC to 23:00 UTC, the observed values are within the analysis envelopes shown in Fig. 9 for 50% of the hours, that is a consistent improvement compared to 535 the background. 
For the other 50% of the hours, the observed values lie outside the envelopes and 14:00 UTC and 19:00 UTC are the two hours when the deviations between observations and analyses are the most evident. For those two hours, the local variability of the precipitation field is extremely large, as shown in Fig. 8 for 14:00 UTC, and the observed values at point A are sort of outliers, if compared to their neighbours. The spatial analysis finds the best estimates of true values, which are areal averages, as discussed in Sec. 2.2.1 and defined in Eq. (3), with spatial supports determined by the EnSI-GAP settings. At 540 14:00 UTC and 19:00 UTC, the representativeness errors of the observations at point A are particularly large with respect to the spatial supports of the true values, such that the corresponding observations get "filtered out" by the analysis and their values are unlikely to occur according to the analysis PDF. Note that by fine-tuning the EnSI-GAP settings, the analysis PDF can be modified such that the analysis spread would become larger, which in this case would correspond to a reduction of the spatial support for the true values, and the analysis envelope would be more likely to include the observations. The trade-off between 545 e.g. accuracy and precision of the analysis at a point ultimately depends on the objective of an application. With respect to the precipitation yes/no distinction, from 14:00 UTC to 23:00 UTC, the analysis clearly shows that precipitation is occurring at the point, while the background is more uncertain. At point B, the analysis uncertainties between 10:00 UTC and 12:00 UTC is so small that the analysis PDF is a Dirac's delta function with 0 mm/h, despite there are no observations exactly located at that point. From 13:00 UTC onward, the analysis follows a gamma PDF and the spread is wider at point B that at point A. On 550 average, the interquartile range is 82% of the analysis value, the difference between the 90th and the 10th percentiles and the 99th and the 1st percentiles are both higher than 100%. However, the values depend on the weather situation, for example at the precipitation peak of 32 mm/h at 17:00 UTC, the interquartile range is 35% of the analysis value and the difference between the 90th and the 10th percentiles is 68% of the analysis. The increase in the analysis spread at point B compared to point remarkable that even for observational dense regions, such as at point A, the analysis spread remains quite large. This is due to the large variability of the observations at the Micro-scale, which by definition includes spatial scales smaller than the Meso-γ. With respect to the EnSI-GAP theory, this large variability is interpreted as a large observation representativeness error. A very dense observational network, that is with observations that are closer than the effective resolution of the background, has two effects on EnSI-GAP: (i) it improves the accuracy of the analysis; (ii) it may increase the analysis uncertainty, such that the tails 560 are more representative of the extremes that occur in the observational dense region, possibly on scales that are not properly resolved by the analysis grid. One of the main innovation of EnSI-GAP compared to traditional spatial analysis methods (Hofstra et al., 2008) is the specification of anisotropic background error covariances between grid points through non-stationary covariance matrices. 
Two visual representations of the correlations associated with those covariances are shown in Fig 10 for points A and B. The corre-565 lations are shown instead of the covariances because we are interested in the shape of the covariance patterns and correlation is a quantity which is then more correct to compare between the two points. The domain considered is Sogn og Fjordane (see Fig. 5) and the covariances are computed over the same 2.5 km grid used in Fig. 8. Then, for visualization purposes, the correlations have been downscaled over a finer resolution grid to highlight asymmetries. The rather large localization length L i = 50 km allows for the dynamics of the forecast ensemble to be visible in the analysis. The corresponding observations, In Fig. 10, the background error correlations between the points A and B and their surroundings within Sogn og Fjordane 575 are displayed. The closest two hundreds observations are shown with different symbols, depending on the rain occurrence. The two points are only 14 km apart, nonetheless the two maps in Fig. 10 are rather different. For point A, the correlation extends westward, while it decays faster moving eastward. The area where the correlation is higher than 0.6 is confined within approximately 5 km in any direction from point A. In the corresponding analysis in Fig. 8, the shape of the area with precipitation rate higher than 30 mm/h looks like the pattern of correlation higher than 0.6 of Fig. 10. At point B, the 580 correlations tend to be more isotropic than for point A and it looks like the correlation extends more eastward than westward. The observations 20 km north-east of point B, that are reporting no precipitation, do have correlations with point B that are comparable to those of the observations at 10 km west of it. The analysis at point B, as shown in Fig. 8, takes into account those correlations and the predicted values in the region between points A and B decrease gradually when moving from the west to the east. Instead, because of the expected better quality of those measurements, they have been reserved as independent observations for verification. This cross-validation strategy is widely used in atmospheric sciences (Wilks, 2019). Validation over South Norway through cross-validation experiments As for Sec. 3.1.2, the EnSI-GAP Algorithm 1 has been used. For this application, the spatial analysis predicts values at those station locations used for cross-validation. Some of the parameters are kept fixed, while others are allowed to vary. The fixed 595 parameters are: p mx = 200, L i = 50 km. Then, as in Sec. 3.1.2, D i is estimated adaptively at each location as the distance between that point and the 10th closest observation location with upper and lower bounds of 3 km and 10 km, respectively. The parameters that are allowed to vary and that are the objective of the sensitivity analysis that follows are: ε 2 and α. There is an important difference here, in this example the radar-derived estimates are assumed to be less precise than the in-situ observations but more precise then the background. The in-situ observations are assumed to be ten times more precise than 600 the background, then ε 2 is set to 0.1 as in Sec. 3.1.2. 
However, the radar-derived observations are assumed to be only two times more precise than the background or, in other words, five times less precise than the in-situ observations, and the elements of the diagonal observation error matrix are set accordingly. The criterion for the "yes" event, for either observations or predictions, is that the corresponding value must be higher than the precipitation threshold specified on the x-axis. For all predictions, it is more likely that a predicted "yes" event corresponds to an observed "yes" event for smaller thresholds than for the higher ones. The added value of the analysis over the background is evident for all configurations. The two configurations with α = 1 present similar ETS curves, though the one with ε 2 = 0.1 performs better. The same holds true when α = 0.1, though in this case the ETS is more sensitive to variations in ε 2 and the analysis performance decreases faster with the increase of ε 2. The relative skill of the analysis probabilistic predictions over the background, used as reference, is shown by means of the Brier Skill Score (BSS) in Fig. 13. As for the ETS, the BSS is also shown as a function of the threshold used to define a "yes" event. The BSS is higher for light precipitation, then it gradually decays with the increase of the precipitation rate. The analyses with α = 1 show better performances over a wider range of amounts than those obtained with α = 0.1. The reliability diagrams plotting the observed frequency against the predicted background and analysis probabilities are shown in Fig. 14. The analyses tend to cross the diagonal, which means that they are overpredicting some probabilities and underpredicting other probabilities. When α = 1, the analyses are underpredicting low probabilities and overpredicting high probabilities. The analysis configuration with α = 1 and ε 2 = 0.1 stays rather close to the diagonal, overpredicting low probabilities and underpredicting high probabilities.
Discussion
For the presented implementation of EnSI-GAP, the Gaussian anamorphosis g() of Eq. (16) is based on the same gamma distribution parameters for the whole domain. This assumption might be too restrictive for very large domains, such as all of Europe, for instance. In this case, different solutions may be explored, such as slowly varying the gamma parameters in space or splitting the domain into sub-domains that are not too large. The EnSI-GAP implementation in Algorithm 1 requires the specification of four parameters: D, the length scale of the scale matrix; L, the localization length scale, which governs the rate at which covariances are suppressed with distance; α, the stabilization factor; and ε 2, used to determine the error variances. EnSI-GAP is designed such that the four parameters are location dependent. It is important to avoid abrupt variations in space of these parameters, otherwise the analysis field will show unrealistic patterns. D depends on the observation spatial distribution, because it is used to ensure that the scale matrix is based on the interaction of several observations and not just one or two. If we assume that it is reasonable to use the observational network to refine the effective resolution of the background, then we can imagine that L should be larger than D. ε 2 varies between 0.1 and 1, for the reasons discussed in Sec. 2.2.2. In Sec. 3.2, we have also specified the observation errors as a function of the data source: radar data have a different quality than in-situ data.
It should also be possible to setup the analysis such that citizen observations have a different quality than observations measured by professional stations. We have assumed that the background ensemble is more likely to overestimate the spread than to underestimate it, for this reason we have assumed values of α between 0.1 and 1. Note that there is a further parameter that can be specified in EnSI-GAP, that is p mx , which is used to keep the analysis procedure local. However, p mx has not been considered among the four parameters optimized in Secs. 3.1.1-3.2. Instead, it has been set to 200, that is a large number for our applications. If p mx is too small, the analysis field will include discontinuities due to the sudden influence of observations that have been given significantly different weights in the analysis at a grid point with respect to the analyses at its neighbouring grid points. p mx is important for operational application over computational systems with 690 limited resources, because it can be regarded as a parameter that limits the use of computational resources within a predefined range. Conclusions The ensemble-based statistical interpolation with Gaussian anamorphosis (EnSI-GAP) applies inverse problem theory to the spatial analysis of hourly precipitation. Numerical model output provides the prior information, and specifically we have 695 considered ensemble forecasts, that have been combined with radar-derived estimates and in-situ observations. EnSI-GAP has been applied on datasets that are typically available within national meteorological services. In addition, opportunistic sensing networks based on citizen observations have been considered. The precipitation representation is a synthesis of all the data available. Thanks to the diffusion of open data policies, the same datasets are also nowadays available in real-time to the general public. For instance, the Norwegian Meteorological Institute provides free access to the weather forecasts and the 700 radar data used in this article via thredds.met.no, while in-situ observations, except the citizen observations, are available via frost.met.no. EnSI-GAP assumes the precipitation fields to be locally stationary, trans-gaussian random fields. The marginal distribution of precipitation at a point is a gamma distribution. Gaussian anamorphosis is used to pre-process data in order to better comply with the requirements of linear filtering. The inverse transformation returns the gamma shape and rate. A special case is 705 considered where uncertainties are so small that the returned analysis values have delta functions as their marginal distributions. EnSI-GAP considers each hour independently and it requires the specification of four parameters that can vary across the domain. The implementation is designed to run in parallel on a grid point by grid point basis. Despite the small number of parameters to optimize, the spatial analysis scheme is flexible enough that it can be applied also when the background ensemble is not representing the truth satisfactorily. An important case is when, in a region, all the ensemble members show 710 no precipitation, while the observations report precipitation. By adding a scale matrix to the flow-dependent background error covariance matrix, the analysis can predict precipitation even where the background is sure that it is not occurring. 
The examples of applications presented allow for a better understanding of the characteristics of EnSI-GAP and they show how the statistical interpolation can be adapted to meet specific requirements. It can be used to fill in the gaps between observation-rich regions to obtain a continuous precipitation field. The analysis expected value is available everywhere, as 715 it is the background, and in observation-dense regions it can be as accurate as the observations. Thanks to the data transformation, the spread of the analysis PDF is less likely to become unrealistically large because of either large model errors or large variability of observed small-scale precipitation. Within certain limits, determined by the spatial distribution of the observa-tional network, the analysis envelope at a point can be tuned such that it is representative of the distribution of precipitation values determined by atmospheric processes occurring at smaller spatial scales than those resolved by the background. For in-720 stance, in an observation-void region, the EnSI-GAP analysis PDF at a point provides a better estimate than the background for the probability of precipitation exceeding a threshold by an observation hypothetically placed at that point. This is an important result, especially when high-impact weather is involved. Author contributions. CL developed EnSI-GAP, tested it on the case studies and prepared the manuscript with contributions from all coauthors. TN and IS configured EnSI-GAP to work with MET Norway's datasets, collected in-situ observations from opportunistic sensing networks and quality controlled them. CE prepared the radar data. Table 2. Overview of variables and notation for local variables. All variables are specified in the transformed space. All the vectors are column vectors if not otherwise specified. If X is a matrix, Xi is its ith column (column vector) and Xi,: is its ith row (row vector).
Characteristics of Systolic and Diastolic Potentials Recorded in the Left Interventricular Septum in Verapamil-sensitive Left Ventricular Tachycardia We studied the electrophysiological characteristics of systolic (SP) and diastolic (DP) potentials recorded during sinus rhythm (SR) in the left interventricular septum of a 27-year-old woman presenting with verapamil-sensitive idiopathic left ventricular tachycardia (VT). During SR, and during VT, SP was activated from ventricular base-to-apex, and DP from apex-to-base. SP and DP were both detected at the site of successful ablation during SR, whereas during VT, DP was detected away from the earliest activation site. Thus, SP apparently reflected a critical component of the reentrant circuit, while DP reflected the activation of a bystander pathway.

We recently reported the presence of systolic (SP) and diastolic (DP) potentials during sinus rhythm (SR) on the left interventricular septum (LIVS) in structurally normal hearts and in the absence of ventricular arrhythmias [11]. The characteristics of SP and DP, including their a) basal or midseptal location, b) morphology and activation sequence, and c) slow conduction properties, are strikingly similar to those of the pre-systolic potentials detected in patients with verapamil-sensitive VT, though the role they play in the arrhythmic mechanism remains to be clarified. The aim of this study was to define the characteristics and electrophysiological properties of SP and DP in a patient presenting with verapamil-sensitive VT.

Case report

A 27-year-old woman without apparent structural heart disease was admitted to our hospital for catheter ablation of a paroxysmal tachycardia with right bundle branch QRS morphology and left axis deviation, terminated by intravenous administration of verapamil, 2.5 mg, i.v. Transthoracic two-dimensional echocardiography revealed the absence of left ventricular false tendons. The patient, who had granted her written informed consent, underwent an electrophysiological study and catheter ablation procedure after discontinuation of all antiarrhythmic drugs for > 5 half-lives. Multi-electrode catheters were placed transvenously in the high right atrium (HRA), His-bundle region and at the right ventricular apex (RVA) or the right ventricular outflow tract (RVOT). Bipolar intracardiac electrograms were filtered between 30 and 400 Hz and stored with the 12-lead surface electrocardiogram on electronic medium (EPLab/EP Amp™, Quinton Electrophysiology Co., Seattle, WA, USA) for further analysis. A 20-pole, 7 F, A-20™ steerable electrode catheter (Biosense Webster, Inc., Diamond Bar, CA, USA) with 1-mm interelectrode spacing and 3-mm distance between adjacent electrode pairs was introduced retrogradely into the left ventricle. The LIVS surface was meticulously mapped during sinus rhythm in search of potentials following the ventricular electrogram. We identified two types of low-frequency late potentials along a narrow base-to-apex line at the base or mid segment of the left posterior fascicle. The first (SP) was recorded during systole, in a ventricular base-to-apex direction; the second (DP) was recorded during diastole, in an apex-to-base direction (Fig. 1A). The position of the mapping catheter recording these potentials, viewed in the right and left anterior oblique fluoroscopic projections, is shown in Figure 1C.
Single extrastimuli, and trains of stimuli, were delivered from the HRA, RVA and RVOT during sinus rhythm (Figs. 1, 2). Atrial or ventricular extrastimulation, or increasingly faster atrial or ventricular overdrive pacing, were associated with a gradually longer delay between ventricular activation and SP, consistent with decremental conduction properties of the tissue between ventricular capture and SP activation (Fig. 2A). Furthermore, the disappearance of SP after atrial or ventricular extrastimuli at critically short coupling intervals suggested that the effective refractory period of the tissue between the ventricular myocardium and the recording site of SP had been reached (Fig. 1B). In contrast to SP, atrial or ventricular pacing was associated with incremental conduction properties of DP (Fig. 1B). Single extrastimuli or burst stimulation from the RVA or RVOT reproducibly induced verapamil-sensitive VT with a 105-ms QRS duration and A-V dissociation (Fig. 2), terminated by ventricular burst stimulation. During VT (Fig. 2B) and entrainment pacing from the HRA, the activation sequence of SP and DP was similar to that observed during SR. The site of earliest ventricular activation was in the mid septum, away from the recording of DP (Fig. 2B). Entrainment pacing from the RVOT confirmed a reentrant mechanism of VT, capturing the ventricular myocardium, followed by SP, which was activated in a base-to-apex sequence, similar to that observed during VT. VT was induced while both SP and DP were being recorded during SR (Fig. 3A) from an ablation catheter located in the inferior mid septum (Fig. 3C). Entrainment from that site suggested that the pacing site was located on a critical segment of the reentry circuit (Fig. 3B). VT terminated approximately two seconds after a single delivery of radiofrequency at that site, and was no longer inducible thereafter. These combined observations confirmed the diagnosis of left ventricular verapamil-sensitive VT.

Discussion

The main observations made in the analysis of this case of verapamil-sensitive VT were: 1) SP and DP were detected during SR along a basal-to-apical line on the LIVS surface; 2) both SP and DP were also recorded during VT; 3) during SR, SP and DP were both recorded at the site of successful ablation, whereas during VT, DP was recorded away from the site of earliest ventricular activation. The characteristics of SP and DP, including a) their anatomical location, b) morphology and activation sequence, c) the decremental conduction properties and effective refractory period of SP, and d) the incremental conduction properties of DP, were similar to those found in subjects with structurally normal hearts and no ventricular arrhythmias, as described in our previous report [11]. We believe that these potentials are not specific to patients with verapamil-sensitive VT. We hypothesized that the substrates responsible for SP and DP are separate thin bundles, connected to the Purkinje network in a single direction [11]. Figure 4 is a schematic representation of the impulse propagation during SR and during VT. During SR, SP is activated from base-to-apex and DP from apex-to-base. Both bundles are penetrated by wavefronts propagating through the interventricular septal myocardium. SP and DP were both recorded at the site of successful ablation during SR, whereas during VT, DP was recorded away from that site (Fig. 2B).
This observation suggests that SP was a critical component of the reentrant circuit, whereas DP was a bystander pathway passively activated outside the main reentry circuit.During VT, the SP--associated bundle was penetrated in a base-to-apex direction, activating the whole heart through Purkinje fibers connected to the SP-associated bundle and His-Purkinje system.The interval between ventricular activation and DP was shorter during VT than during SR, probably because of incremental conduction properties of the DP. Pre-systolic potentials preceding the Purkinje or bundle branch potentials have been described in patients with verapamil-sensitive idiopathic left VT, which may reflect a critical segment of the reentrant circuit on the basis of high success rates of catheter ablation or from observations made from entrainment studies [4][5][6][7][8][9][10].The basal or mid septal recording sites, morphology and activation sequence of the potentials, and slow conduction properties described, are characteristics strikingly similar to those associated with our SP or DP.Although the SP or DP are frequently found not only in patients with verapamil-sensitive VT but also in normal subjects as described earlier, further studies are needed to determine whether the SP or DP during SR may be indicative of successful ablation site of verapamil-sensitive VT. Conclusions In a patient with verapamil-sensitive VT, SP and DP were detected on the LIVS surface during SR and during VT.This seemed to represent a critical component of the reentrant circuit, and a bystander pathway, respectively. Figure 1 . Figure 1.Systolic and diastolic potentials recorded during sinus rhythm (A) and high right atrial extrastimulation (B), and right (RAO) and left (LAO) anterior oblique fluoroscopic views (C) showing the position of the mapping catheter (MAP); A. Activation of the left bundle branch potentials in a basal-to-apical sequence, preceding activation of the ventricular myocardium from apex to base suggests that the Purkinje-muscle junction was located near the fusion point of the two potentials.Solid and dashed arrows indicate the direction of systolic (SP) and diastolic (DP) potentials activation, respectively; AH = 78 ms; HV = 45 ms; QRS onset to SP = 98-143 ms; SP amplitude = 0.20--0.86mV; QRS onset to DP = 370-498 ms; SP amplitude = 0.37-0.86mV; B: At an S1-S2 coupling interval of 340 ms and S1-S1 basic cycle length of 600 ms, the V2-DP interval is shorter than the V1-DP interval, suggesting incremental conduction properties of the tissue between the site of ventricular pacing and DP.SP is no longer visible after S2, as the effective refractory period of the tissue between the site of ventricular pacing and SP has been reached; C. Recordings of SP and DP (A17-18 to A5-6) are from the basal septum just below the aortic valve to the mid septum; I, II, V1 = surface ECG leads; HRA -high right atrium; HBE -His-bundle electrogram; A19-20 to A1-2 -left ventricular mapping sites from base to apex Figure 2 . Figure 2. Systolic (SP) and diastolic (DP) potentials recorded during ventricular tachycardia induced by extrastimulation at the right ventricular apex (A) and right ventricular outflow tract (B).Solid and dashed arrows show the direction of activation of SP and DP, respectively; A. 
At an S1-S2 coupling interval of 280 ms and S1-S1 basic cycle length of 500 ms, S2-SP is distinctly longer than S1-SP, particularly at the apical end of SP (A9-10), indicating the presence of decremental conduction between ventricular myocardium and SP activation; B. Induction of ventricular tachycardia by ventricular extrastimulation at an S1-S2 interval of 220 ms and S1-S1 basic cycle length of 500 ms (not shown).The activation sequence of SP and DP during ongoing ventricular tachycardia is similar to that shown during sinus rhythm in Figure 1A.Note that the earliest ventricular activation is at A1-2, in the mid septum, away from the recording of DP.Other abbreviations are as in Figure 1. Figure 3 . Figure 3. Intracardiac electrograms at the site of successful ablation during sinus rhythm (SR) (A) and entrainment pacing (B).Catheter positions during pace-mapping of ventricular tachycardia (VT) with the ablation catheter (ABL) at the site of successful ablation, in the right anterior oblique (RAO) (C) and antero-posterior (AP) (D) fluoroscopic views; A. A spiky Purkinje potentials preceded the ventricular electrograms at the site of successful ablation during SR.Note that SP (solid arrows) and DP (dashed arrows) were both recorded at that site; B. Entrainment pacing at a cycle length of 290 ms shows constant fusion.At the site of successful ablation (ABL1-2) during VT, a Purkinje potential preceded the onset of the QRS complex by -13 ms (solid arrows) and low-amplitude, low-frequency SP or DP (dashed arrows) were recorded.The 300-ms post-pacing interval is equal to the tachycardia cycle length.These findings suggest that the entrainment site (ABL1-2) is located near the exit of a critical segment of the reentrant circuit, which entrainment pacing captured simultaneously with the surrounding ventricular myocardium, thus exhibiting constant fusion; C, D. The ablation catheter (ABL) at the site of successful ablation was in the inferior mid septum.ABL1-2 and 3-4 -distal and proximal ablation catheter recordings.Other abbreviations are as in Figure1. Figure 4 . Figure 4. Hypothetical representation of activation of systolic (SP) and diastolic (DP) potentials, His-Purkinje system and interventricular septum, during sinus rhythm (A) and ventricular tachycardia(B).See text for detailed explanation.
Peripheral insulin administration enhances the electrical activity of oxytocin and vasopressin neurones in vivo Oxytocin neurones are involved in the regulation of energy balance through diverse central and peripheral actions and, in rats, they are potently activated by gavage of sweet substances. Here, we test the hypothesis that this activation is mediated by the central actions of insulin. We show that, in urethane‐anaesthetised rats, oxytocin cells in the supraoptic nucleus show prolonged activation after i.v. injections of insulin, and that this response is greater in fasted rats than in non‐fasted rats. Vasopressin cells are also activated, although less consistently. We also show that this activation of oxytocin cells is independent of changes in plasma glucose concentration, and is completely blocked by central (i.c.v.) administration of an insulin receptor antagonist. Finally, we replicate the previously published finding that oxytocin cells are activated by gavage of sweetened condensed milk, and show that this response too is completely blocked by central administration of an insulin receptor antagonist. We conclude that the response of oxytocin cells to gavage of sweetened condensed milk is mediated by the central actions of insulin. | INTRODUC TI ON Insulin is widely known for its role in glucose homeostasis on peripheral tissues, although its central effects are not yet fully elucidated. Once secreted into the circulation, insulin is transported into the brain by a saturable transport mechanism. 1,2 Both exogenous insulin administration and glucose-stimulated insulin secretion result in a progressive increase of insulin in the cerebrospinal fluid (CSF) in several species, including humans. [3][4][5][6] Accordingly, insulin concentrations in the CSF correlate with levels in plasma, although they are approximately 15-fold lower than plasma concentrations in fasted rats. 3 In the brain, regions sensitive to insulin include the hypothalamus, 7,8 which contains insulin-responsive neurones in several nuclei. [9][10][11] Amongst these, the insulin receptor (InsR) is abundantly expressed in the supraoptic nucleus (SON), [12][13][14] which exclusively contains magnocellular oxytocin and vasopressin cells, and i.p. administration of insulin induces the expression of Fos protein in parvo-and magnocellular oxytocin cells of the paraventricular nucleus in rats. 15 Explants of the hypothalamo-neurohypophysial system, including the SON and its projections to the posterior pituitary, release oxytocin and vasopressin in response to direct application of insulin, 16 and central administration of insulin increases peripheral secretion of oxytocin in mice by a direct action on oxytocin cells. 17 In addition to their classical roles in reproduction, 18,19 stress 20 and water balance, 21 oxytocin and vasopressin have roles in energy homeostasis. 22 Both central and peripheral oxytocin administration exert anorexigenic effects, increase energy expenditure and induce lipolysis. [23][24][25][26] Peripheral administration of both oxytocin and vasopressin can induce the release of insulin from the pancreas [27][28][29][30] and systemically administered oxytocin in humans (administered intranasally) has been reported to curb the meal-related increase in plasma glucose, 31 as well as to improve β-cell responsivity and glucose tolerance in healthy men. 
32 Studies using well-validated radioimmunoassays in extracted plasma samples 33 indicate that patients with metabolic syndrome exhibit higher circulating oxytocin concentrations than normal individuals, 34 and patients with diabetes have higher concentrations of vasopressin, as well as copeptin (which is co-secreted with vasopressin). 35,36 In the present study, we examine whether peripheral (i.v.) administration of insulin affects the electrical activity of oxytocin and vasopressin cells in the SON of urethane-anaesthetised rats. The effect of different feeding states, and consequently different blood glucose concentrations, on these responses was also investigated. We investigated the role of brain InsR in these responses by blocking these receptors using an InsR antagonist. Finally, we tested whether the previously reported enhanced electrical activity of oxytocin cells in response to sweet food gavage 37 was mediated by endogenous insulin release acting on brain InsRs. | Animals We used adult male Sprague-Dawley rats weighing 300-350 g. The rats had ad lib. access to food and water and were maintained under a 12:12 hour light/dark cycle (lights on 7.00 am) at a room temperature of 20-21°C. In most experiments, we used fasted rats to reduce the variability of blood glucose and gastric signals (induced by prior food consumption) that could affect neural activity and so, in these experiments, the food was removed overnight (~15 hours). All procedures were conducted on rats under deep terminal anaesthesia in accordance with the UK Home Office Animals Scientific Procedures Act 1986 and a project licence approved by the Ethical Committee of the University of Edinburgh. 38 was given into the third ventricle using a 31-gauge needle inserted through the median eminence; 1 nmol (4.8 µg) of S961 was injected at 1 µL min -1 . We chose a dose expected to be sufficient to block insulin receptors throughout the brain when given i.c.v., although lower than that needed to antagonise insulin actions if given peripherally. The affinity of S961 for both isoforms of the insulin receptor is close to that of insulin itself. 38 Previous studies have reported that bilateral injections of 100 ng of S961 into the arcuate nucleus block the effects of insulin microinjected into the arcuate nucleus on lumbar sympathetic nerve activity in late pregnant rats. 39 Studies using the closely related antagonist S661, which has properties indistinguishable from those of S961, indicated that peripheral doses of 30 nmol kg -1 or more are needed to block the effects of i.v. administration of 30 mmol kg -1 insulin on blood glucose levels. 38 As detailed below, the i.c.v. application of S961 in our hands had no significant effect on plasma glucose concentrations. | Sweet condensed milk gavage In fasted rats, a gavage tube was inserted orally into the stomach to deliver a total volume of 5 mL of sweetened condensed milk (SCM; Nestle, Vevey, Switzerland) diluted 50% v/v in distilled water (40.8 kJ, 1.68 g sugar, 0.24 g fat) at 0.16 mL min -1 . | In vivo electrophysiology Rats were briefly anaesthetised with isoflurane inhalation anaesthesia, and then urethane (ethyl carbamate 25% solution) was in- Design, Cambridge, UK) connected to a PC running spike2, version 7.20 (Cambridge Electronic Design). Most recordings were made from single neurones; in some experiments, the spike activity of two cells was recorded simultaneously; in these cases, the spikes were discriminated and analysed offline using the waveform function of spike2. 
Recordings were made between 12.00 pm and 5.00 pm (lights on 7.00 am to 7.00 pm). Rats were tested only once with insulin. Supraoptic nucleus neurones were antidromically identified through stimulation of the pituitary stalk by matched biphasic pulses (1 ms, <1 mA peak to peak), which produce an antidromic spike at a constant latency (~10 ms) ( Figure 1A). Oxytocin cells were discriminated from continuous-firing vasopressin cells ( Figure 1B given at 20 μg kg -1 , comprising a transient excitation of oxytocin cells, and no effect or short inhibition of vasopressin cells ( Figure 1E, F). 40,41 CCK was given at the end of the experiments to identify continuously-firing cells. | Effect of i.v. insulin The spontaneous spiking activity of SON neurones was recorded for 20 minutes (basal activity) and for at least 60 minutes after i.v. insulin. Blood samples (50 μL) were taken to measure glucose immediately before administration of insulin or vehicle, as well as 15, 30, 60, 90 and 120 minutes later. | Effect of restoring circulating glucose content in insulin-responsive neurones The basal activity of SON neurones was recorded for 20 minutes, and for another 30 minutes after i.v. insulin. Then, glucose was given i.v. and the spike activity recorded for further 30 minutes. Blood glucose concentrations were measured before insulin, 30 minutes later (ie, before i.v. glucose) and 5 and 20 minutes after the first glucose injection. Only rats exhibiting in the last sample a blood glucose concentration within 15% of the value in the basal sample were used. | Blockade of central InsRs The basal spike activity of SON neurones in fasted rats was recorded for 20 minutes. Then, S961 was given i.c.v. and spike activity recorded for 15 minutes. After this, insulin was given i.v. and the spike activity recorded for another 30 minutes. Blood glucose concentrations were measured using an Accu-Chek Aviva meter (Roche Diagnostics GmbH, Mannheim, Germany) immediately before S961 injection, 15 minutes later (ie, before i.v. insulin) and 30 minutes after i.v. insulin. | Effect of central InsR blockade on SCMstimulated activity of oxytocin cells The basal spike activity of SON neurones was recorded for 20 minutes. Then, rats were injected i.c.v. with either vehicle or S961 and activity recorded for 10 minutes. After this, SCM was gavaged (over 30 minutes) and spike activity recorded for 1 hour. Blood samples (300 μL) were taken immediately before the i.c.v. injection, 10 minutes later (ie, before SCM gavage) and at 30 and 60 minutes after the start of gavage. Blood glucose concentrations were measured immediately after sampling; then, samples were centrifuged in EDTA-coated tubes, and plasma collected and stored at −80°C for insulin measurements using a rat/mouse insulin ELISA kit (catalogue no. EZRMI-13K; EMD Millipore, Burlington, MA, USA). When plotted this way, a negative exponential distribution (the distribution characteristic of random events) becomes a constant 'hazard' proportional to the average firing rate. Deviation from this then become interpretable as periods of decreased or increased excitability. Consensus hazard functions were calculated from the means of hazard functions. | Statistical analysis Data were analysed using Prism, version 6 (GraphPad Software Inc., San Diego, CA, USA). Responses to insulin were analysed by comparing the mean firing rate in the 60-minute after insulin with the (basal) firing rate over the 20-minute control period. 
The changes were compared using a two-tailed Wilcoxon signed-rank test. The activity of phasic cells was analysed in spike2; detection of a burst of activity was defined by spike activity lasting at least 5 seconds and containing >20 spikes followed by >5 seconds of spike silence between bursts. The mean burst duration, interburst interval and activity quotient (percentage of active time over the total time) over the 20-minute basal and 60 minutes after insulin were compared using Wilcoxon matched-pairs signed-rank test. The effect of glucose on insulin-responsive cells was analysed by comparing the mean change in firing rate (spikes s -1 in 10-minute bins) before and after glucose (ie, 0-30 minutes vs 30-60 minutes) using Wilcoxon matched-pairs signed-rank test. The effect of blockade of central InsRs was analysed by testing whether the mean change in firing rate in the 15-minute after S961 injection was significantly different from 0 (ie, from the basal rate) using a two-tailed Wilcoxon signed-rank test. Then, the mean change in firing rate over 30-minute after insulin was compared with the firing rate in the 15 minutes after S961 using a two-tailed Wilcoxon signed-rank test. One-way ANOVA followed by a post-hoc Bonferroni test was used to compare glucose profiles. The mean change in firing rate over 60 minutes and the glucose profiles between fasted and non-fasted rats were compared using two-tailed Mann-Whitney test and two-way ANOVA followed by post-hoc Bonferroni multiple comparison tests, respectively. We also compared the change in firing rate to determine whether different treatments affect the responses of SON neurones to insulin using two-way ANOVA followed by a post-hoc Bonferroni test. The effect of prior blockade of central InsRs on SCM-induced activity was analysed using a two-tailed Mann-Whitney test comparing the mean change in firing rate over 60 minutes between i.c.v. control-and S961-treated rats. The change in firing rate (in 10-minute bins), blood glucose concentrations and plasma insulin content between the two groups were compared using two-way ANOVA, followed by a post-hoc Bonferroni test. All data are reported as the mean ± SEM. P < 0.05 was considered statistically significant, unless otherwise stated. Recordings were made from 10 oxytocin cells in 10 fasted rats and from 10 cells in nine non-fasted rats (including one double recording). In non-fasted rats, the mean ± SEM (range) basal firing rate of 2.5 ± 0.4 (0.7-4.1) spikes s -1 increased by 0.9 ± 0.3 (0.1-2.5) spikes s -1 (averaged over the 60 minutes after i.v. insulin; P = 0.002, Wilcoxon signed-rank test) (Figure 2A,B). In fasted rats, oxytocin cells responded more strongly ( Figure 2C | Vasopressin cells In six fasted rats, recordings were made from 10 vasopressin cells signed-rank test) ( Figure 2D). In the eight phasic cells, insulin increased the burst duration (from 73 ± 17 seconds to 328 ± 137 seconds). In these cells, the interburst period was reduced (from 64 ± 26 to 61 ± 18 seconds); the activity quotient was increased from 0.6 ± 0.1 to 0.7 ± 0.1, and the intraburst frequency was increased from 6.6 ± 0.8 to 7.2 ± 0.7 spikes s -1 . Eight of 10 vasopressin cells in fasted rats and nine of sixteen vasopressin cells in non-fasted rats increased their activity by more than 10% after i.v. insulin, and the mean response of all vasopressin cells tested was greater in fasted rats than in non-fasted rats, although this did not reach statistical significance (Mann-Whitney U test, P = 0.63). 
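As a concrete illustration of the interspike-interval hazard analysis described in the statistical methods above, the following is a minimal Python sketch; the bin width, maximum lag and the synthetic Poisson spike train are illustrative assumptions, not the settings used in the study.

```python
import numpy as np

def isi_hazard(spike_times_s, bin_ms=2.0, max_ms=200.0):
    """Hazard = probability of a spike in each bin, given no spike yet since the
    last one; a Poisson train gives a flat hazard proportional to its firing rate."""
    isis_ms = np.diff(np.sort(spike_times_s)) * 1000.0
    edges = np.arange(0.0, max_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(isis_ms, bins=edges)
    # Number of intervals still "at risk" at the start of each bin.
    at_risk = isis_ms.size - np.concatenate(([0], np.cumsum(counts)[:-1]))
    hazard = np.where(at_risk > 0, counts / np.maximum(at_risk, 1), np.nan)
    return edges[:-1], hazard

# Illustrative check: a homogeneous Poisson train at ~3 spikes/s yields a roughly
# constant hazard of rate * bin width (~0.006 per 2-ms bin), with no refractory dip.
rng = np.random.default_rng(1)
spikes = np.cumsum(rng.exponential(scale=1.0 / 3.0, size=5000))
lags_ms, hz = isi_hazard(spikes)
```

A consensus hazard for a group of cells can then be obtained by averaging the per-cell hazard functions, as described above for the consensus hazard functions.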
| Hazard functions In oxytocin cells, the hazard functions conformed to the profile previously reported as typical of oxytocin cells, reflecting a prolonged post-spike refractoriness of 30-50 ms followed by a stable plateau of excitability. 42 Insulin did not affect the duration of the post-spike refractoriness but elevated the plateau level of excitability ( Figure 2E). In vasopressin cells, the hazard functions also conformed to the profile previously reported as typical of vasopressin cells, reflecting a post-spike refractoriness of 20-50 ms followed by a period of hyperexcitability (reflecting a depolarising afterpotential) before reaching a stable plateau of excitability. 42 Insulin did not affect the duration of the post-spike refractoriness or the plateau level of excitability but enhanced the post-spike hyperexcitability ( Figure 2F). | Blockade of central InsRs before i.v. insulin To test whether the activation of SON neurones by insulin involves F I G U R E 3 Effect of i.v. glucose infusion in insulin-responsive neurones in fasted rats. A, Blood glucose concentrations were lowered after i.v. insulin, but not i.v. vehicle, in fasted (F) and non-fasted (NF) rats (*P < 0.05, Two-way ANOVA followed by a Bonferroni post-hoc test). B, Blood glucose concentrations after i.v. insulin and after i.v. insulin, and 5% glucose solution injections (arrows: as required) of all 10 rats where neuronal activity was recorded. After 30-minutes of i.v. insulin, the glucose concentration was significantly lower compared to all other blood samples (one-way ANOVA for repeated measures; ***P < 0.001, Bonferroni post-hoc test) with no significant differences between other samples. B, Blood glucose concentrations after i.v. insulin, and 5% glucose solution injections (arrows: 400 µL, *300 µL, *100 µL; * if required) of all animals (n = 10) where neural activity was recorded. C, D, After insulin, no significant differences in firing rate of (C) oxytocin and (D) vasopressin cells in glucose-treated rats were detected compared to non-glucose-treated fasted rats. Data are the mean ± SEM | Effect of blockade of central InsRs on oxytocin spike activity induced by SCM gavage Gavage of food rich in sugars, but not fat, results in a rise of blood glucose and insulin plasma concentration and a progressive increase in the electrical activity of oxytocin cells. 37 Here, we tested whether this involves brain InsRs. Both vehicle-and S961-injected rats exhibited a significant increase in both blood glucose concentration and plasma insulin concentration following SCM gavage ( Figure 5A) with no significant differences between groups (glucose: two-way ANOVA for repeated measures: interaction, In vehicle-injected rats, as expected, 37 to the posterior pituitary, also release large amounts of oxytocin within the brain from their dendrites. This dendritic release is likely to have important effects at relatively local sites, including the amygdala and the ventromedial nucleus of the hypothalamus where abundant oxytocin receptors are expressed but which contain only sparse oxytocin fibres. 23,45 In addition, it has recently become apparent that many magnocellular neurones have extensive axonal projections to diverse brain regions, including notably to the nucleus accumbens. 46 In the present study, systemic administration of insulin increased the electrical activity of both oxytocin and vasopressin SON cells, consistent with previous reports in humans and rats that insulin increases secretion of oxytocin and vasopressin. 
[47][48][49] As originally conceived in the design of the present experiments, the dose and route of insulin administration followed the conventional design of insulin tolerance tests 50 to produce an acute maintained hypoglycaemia. This bolus injection raises peripheral insulin concentrations above the normal physiological range, which are then rapidly cleared. The evolution of oxytocin cell activity after insulin injections thus mirrored neither the changes in plasma glucose, nor the expected changes in peripheral insulin concentration. Insulin crosses the bloodbrain barrier by an active transport mechanism that is saturated: at least 50% of maximal transport capacity is reached at euglycemic levels of plasma insulin; thus, supraphysiological levels of insulin in the plasma have little additional effect on insulin penetration into the brain beyond that seen at high physiological levels. 1,51 Thus, the expected evolution of CNS insulin following i.v. bolus injection is a progressive rise when peripheral levels are elevated above normal levels, possibly explaining the progressive rise in oxytocin cell activity. | D ISCUSS I ON Brain InsRs play an important role in the control of energy balance as shown by selective genetically-induced decreased expression of brain InsRs which is linked to a peripheral metabolic alterations, including increased food intake, fat and body weight, as well as increased glucose and insulin resistance in rodents. 52,53 Moreover, injection of the InsR antagonist S961 into the ventromedial nucleus increases blood glucose concentration in rats. 54 In the present study, central adminis- In non-fasted rats, which exhibited a more pronounced hyperglycaemia than fasted rats, the responses of oxytocin cells were less prominent than in fasted rats. This may reflect InsR desensitisation in oxytocin cells, similarly to that shown in skeletal muscle in vivo 56 and fibroblasts in vitro, 57 where acute exposition to high glucose concentration reduced insulin-stimulated glucose uptake and impaired InsR intracellular signalling, respectively. Alternatively, because, in fasted animals, blood glucose concentrations fell following insulin administration to concentrations lower than immediately after anaesthesia, this might stimulate the hypothalamic-pituitary-adrenal as occurs in the insulin tolerance test, 55 potentiating the release of oxytocin (and vasopressin). A recent study 17 raised a question about the capacity of SON neurones to respond to insulin administration because insulin given i.c.v. induced an increase in Fos expression after 90-minutes in 13% of the PVN, but not SON, oxytocin cells compared to control mice. Nevertheless, SON neurones appear to be intrinsically sensitive to insulin and glucose because they express InsR 12-14 and the enzyme glucokinase, 58 a marker for glucose sensing. Moreover, vasopressin and oxytocin are released from SON explants in the presence of medium containing glucose and insulin. 16 Although Fos protein has been widely used as a marker for neuronal activation, its lack of expression does not necessarily exclude changes in neural activity as observed in some conditions, and increased spike activity is not invariably linked to Fos expression. 59,60 It appears that insulin might not induce the expected rapid expression of Fos (ie, 60-90 minutes) because Griffond et al 15 reported that, at 1 hour after insulin i.p. (20 mg kg -1 ), there was little expression of Fos in PVN oxytocin cells. 
A limitation of the present study is that it involved urethane-anesthetised rats. Urethane has long been the anaesthetic of choice for SON electrophysiological recordings because it provides a deep long-lasting stable anaesthesia compatible with transpharyngeal surgery without affecting the physiological responses of SON neurones. 40 However, urethane raises blood glucose concentrations 61,62 by increasing sympathetic tone 63 and consequently increasing gluconeogenesis. Thus, blood glucose concentrations in both non-fasted and fasted anaesthetised rats were higher than in conscious Sprague-Dawley rats. 64 However, they were lower in fasted rats than in non-fasted rats, and changed in the expected manner in response to i.v. insulin. ACK N OWLED G EM ENTS This work was supported by the BBSRC (BB/S000224/1). CO N FLI C T O F I NTE R E S T S The authors declare that they have no conflicts of interest. AUTH O R CO NTR I B UTI O N S The study was designed by GL and performed by LP. LP and GL analysed the data and wrote the paper together. GL had full access to all the data and analyses, and takes responsibility for the integrity of the data and the accuracy of the analyses. DATA AVA I L A B I L I T Y The datasets generated during and/or analysed during the present study are available from the corresponding author upon reasonable request.
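For readers who wish to reproduce the style of analysis used above, the following is a simplified sketch of the basal-versus-post-injection firing-rate comparison described in the statistical analysis section. It assumes one spike-time array per cell (seconds, with the injection at t = 0); the window lengths follow the text, the synthetic spike trains are purely illustrative, and the signed-rank test is SciPy's implementation rather than the exact software used in the study.

```python
import numpy as np
from scipy import stats

def mean_rate(spike_times_s, t_start, t_end):
    """Mean firing rate (spikes per second) in the window [t_start, t_end)."""
    n = np.count_nonzero((spike_times_s >= t_start) & (spike_times_s < t_end))
    return n / (t_end - t_start)

def basal_vs_post(cells, basal_min=20.0, post_min=60.0):
    """Per-cell rates for the 20-min basal and 60-min post-injection windows,
    compared with a two-tailed Wilcoxon signed-rank test."""
    basal = np.array([mean_rate(c, -basal_min * 60.0, 0.0) for c in cells])
    post = np.array([mean_rate(c, 0.0, post_min * 60.0) for c in cells])
    stat, p = stats.wilcoxon(basal, post, alternative="two-sided")
    return basal, post, p

# Illustrative synthetic data: ten cells whose rate rises modestly after t = 0.
rng = np.random.default_rng(2)
cells = [np.concatenate([np.sort(rng.uniform(-1200, 0, size=rng.poisson(2.5 * 1200))),
                         np.sort(rng.uniform(0, 3600, size=rng.poisson(3.2 * 3600)))])
         for _ in range(10)]
basal, post, p = basal_vs_post(cells)
```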
Transcriptional and Translational Dynamics of Zika and Dengue Virus Infection Zika virus (ZIKV) and dengue virus (DENV) are members of the Flaviviridae family of RNA viruses and cause severe disease in humans. ZIKV and DENV share over 90% of their genome sequences, however, the clinical features of Zika and dengue infections are very different reflecting tropism and cellular effects. Here, we used simultaneous RNA sequencing and ribosome footprinting to define the transcriptional and translational dynamics of ZIKV and DENV infection in human neuronal progenitor cells (hNPCs). The gene expression data showed induction of aminoacyl tRNA synthetases (ARS) and the translation activating PIM1 kinase, indicating an increase in RNA translation capacity. The data also reveal activation of different cell stress responses, with ZIKV triggering a BACH1/2 redox program, and DENV activating the ATF/CHOP endoplasmic reticulum (ER) stress program. The RNA translation data highlight activation of polyamine metabolism through changes in key enzymes and their regulators. This pathway is needed for eIF5A hypusination and has been implicated in viral translation and replication. Concerning the viral RNA genomes, ribosome occupancy readily identified highly translated open reading frames and a novel upstream ORF (uORF) in the DENV genome. Together, our data highlight both the cellular stress response and the activation of RNA translation and polyamine metabolism during DENV and ZIKV infection. Importance Zika and dengue virus are major causes of morbidity in tropical countries and with a changing climate, they are increasingly seen across the globe. Zika and dengue virus Introduction Zika virus (ZIKV) and dengue virus (DENV) belongs to the family Flaviviridae and contain single-stranded plus-sense RNA genomes [1]. Both viruses are transmitted to humans by tropical Aedes mosquitos that are increasingly reaching first world regions. There have been frequent ZIKV outbreaks since 2007 and infections are linked to multiorgan failure in adults, and fetal defects including microcephaly, other malformations, and fetal demise [2,3]. While still rare in the USA, DENV is one of the worst mosquitoborne human pathogens in the world. DENV infections have increased dramatically and currently stand at 100-400 million infections per year. This makes DENV a leading cause of disease in tropical countries [4]. Clinical symptoms of DENV infection have led to the name "breakbone fever" and severe cases include hemorrhagic fever, dengue shock syndrome, and death [5]. A live-attenuated DENV vaccine has been approved in 2015 and complements mosquito control measures and personal protection (World Health Organization) [6]. ZIKV and DENV share over 90% of their genome sequences but show differences in cellular tropism and molecular effects. Previous studies on the transcriptional and immunological effects of DENV and ZIKV have revealed that both viruses induce a classical Type I interferon anti-viral response [7]. Single-cell sequencing revealed activation of an MX2-related transcription program in the B lymphocytes [8]. Transcriptome meta-analysis in DENV infected human patients and human monocyte THP-1 cells revealed effects on cell junction and extracellular matrix proteins [9]. The transcriptional effects of ZIKV infection appear distinct and changes in cell cycle, mRNA processing, and metabolism have been reported [10,11]. 
Both viruses also modulate mRNA translation and their replication and translation is a potential vulnerability [12]. For example, ribosome profiling in liver cancer cells suggests that DENV activates ER-linked cellular stress while the effects of ZIKV are not known [13]. To enhance their replication and translation, both viruses mimic the 5 UTR cap structure of cellular mRNAs and escape the cellular recognition [14][15][16]. However, they depend on cellular enzymes such as the eIF4E kinase p38-MNK1 and the RNA helicase eIF4A (DDX2) and both enzymes are accessible with current inhibitors [17][18][19][20]. In this study, we use high-resolution ribosome profiling and RNA deep sequencing (RNA-seq) to define the gene expression and mRNA translation dynamics of the viral and host genomes during ZIKV and DENV infection of human neuronal progenitor cells (hNPCs). Cell Culture and Virus Infection We used primary hNPCs obtained from two different human donors (clone A and clone B). All hiPSC research was conducted under the oversight of the Institutional Review Board (IRB) and Embryonic Stem Cell Research Overview (ESCRO) committees at ISSMS. The participants provided written informed consent. NPCs were seeded onto matrigel-coated tissue culture plates at 300k cells per well of a 6-well plate in NPC medium (DMEM/F12 (Life Technologies, Carlsbad, CA, USA, #10565) supplemented with 1 × N2 (Life Technologies, #17502-048), 1 × B27-RA (Life Technologies, #12587-010), 20 ng/mL FGF2 (R&D, #233-FB-10), and 1 mg/mL Natural Mouse Laminin (Life Technologies, #23017-015). NPCs were fed every second day and split once per week using Accutase. ZIKV IbH 30656 NR-50066 was isolated from the blood of a human in Ibadan, Nigeria, and was obtained by M.K. through BEI Resources, NIAID, and NIH, as part of the WRCEVA program. DENV-2 strain 16,681 was obtained by M.K. from the World Arbovirus Reference Center, University of Texas Medical Branch, Galveston, TX. A total of 6-well plates were coated with poly-L-Ornithine (PLO) and laminin before seeding hNPCs (8 × 10 5 cells per well) [21]. During 24 h post-plating, the cells were infected with ZIKV or DENV at a multiplicity of infection of 1 and incubated at 37 • C. Viral inocula were removed at 3 h post-infection (hpi). Cells were collected and processed for ribosome footprinting at 72 hpi. Ribosome Footprinting Human neuronal progenitor cells (hNPCs) (n = 2) were infected with ZIKV-Ibh and DV-2 16,681 (72 h) followed by cycloheximide treatment for 10 min. Total RNA and ribosomeprotected fragments were isolated following the published protocol [22]. Small RNA libraries were generated using the SMARTer smRNA kit from Illumina. Deep sequencing libraries were generated from two independent clones in replicates (n = 2) and sequenced on the HiSeq 2000 platform. Genome annotation was from the human genome sequence GRCh37 downloaded from Ensembl public database: http://www.ensembl.org, accessed on 17 October 2018. For the virus genome, we used the reference genomes for ZIKV-Ibh and DV-2 16,681 downloaded from the NCBI database [23]. Sequence Alignment First, ribosome footprint (RF) reads were filtered based on the quality score, which kept reads that have a minimum quality score of 25 for at least 75 percent of the nucleotides. Second, the linker sequence (5 -CTGTAGGCACCATCAAT-3 ) was trimmed from the 3 end of the reads. Next, we filtered out the reads shorter than 15nt after the linker-trimming step. 
All these steps were done by using FASTX-Toolkit (http://hannonlab.cshl.edu/ fastx_toolkit/index.html accessed on 17 October 2018). The ribosome footprint reads were first aligned to the virus genome using Bowtie2 [24]. Specifically, the reads were first mapped to ZIKV and DENV genomes. The unmapped reads were used for downstream analysis of the human genome. To remove ribosomal RNA, the footprint reads were then aligned to the ribosome RNA sequences of GRCh37 downloaded from UCSC Table Browser (https://genome.ucsc.edu/cgi-bin/hgTables accessed on 17 October 2018). After removing the reads aligned to the ribosome RNAs, RF reads were mapped to the human genome sequence GRCh37 downloaded from Ensembl public database: http://www.ensembl.org accessed on 17 October 2018 using HISAT2 with default parameters [25,26]. We only used the uniquely aligned reads for further analysis. Total mRNA sequencing reads were first aligned to the virus genome as done for the ribosome footprint reads. Then the unmapped reads were aligned to the GRCh37 reference using HISAT2 [25,26]. Similarly, as RF reads alignment, we performed the splice alignment for the paired-end mRNA-seq datasets with the default parameters. We only kept the uniquely aligned reads for the downstream analysis. The virus genome alignment quantification was done using featureCounts with the virus annotation (for both RF and mRNA sequencing in both ZIKV and DENV genome) [27]. The human genome alignment quantification for both RF and mRNA sequencing was done using featureCounts with the annotations of the protein-coding genes of GRCh37 as input. Only reads aligned to the exonic regions of the protein-coding genes were used for the downstream analysis using RiboDiff [28]. Footprint Profile Analysis Using Ribo-Diff We used Ribo-diff to analyze the translation efficiency based on the ribosome footprinting and mRNA sequencing data [28]. Genes with at least 10 normalized read count as the sum of RF and RNA sequencing data were used as input, which resulted in 19,821 proteincoding mRNA. Genes with significantly changed translation efficiency were defined by the q-value cut-off equal to 0.05. Motif Analysis The longest transcript was selected to represent each corresponding gene. The 5 UTR sequences of the transcripts were collected for predicting motifs. Both the significant genes with increased or decreased TE and the corresponding background gene sets were used to predict motifs by DREME [29]. The occurrences of the significant motifs (E < 0.05 and p < 1 × 10 −8 from DREME) were called using the FIMO [29] with default parameters for strand-specific prediction of all the 5 UTR sequences. Statistical Analysis All the results were analyzed with two-tailed t-tests unless specified. The significance of motif enrichments was from the DREME program based on the Fisher's Exact Test. A hypergeometric test was performed to test for the significance of the enrichment of the gene overlap in GSEA pathway analysis. Transcriptional Changes Induced by ZIKV and DENV We simultaneously sequenced total RNA and ribosome-protected RNA fragments from uninfected and virus-infected human neuronal progenitor cells (hNPCs) ( Figure 1A, the complete dataset is submitted to GEO). Briefly, two independent clones of hNPCs were differentiated from hiPSC (n = 2) and infected with ZIKV (IbH isolate) and DENV-2 (strain 16681) (referred to as ZIKV and DENV from here on) with an MOI of 1 [23]. 
hNPCs were differentiated from hiPSC from different healthy donors (n = 2) and complete differentiation was characterized by immunostaining for Nestin and Sox9 (Figure S1A). Consistent with prior observations, we found that ZIKV was more infective than DENV in hNPCs [30] and therefore we optimized the infection conditions to achieve equal infection rates, as indicated by immunostaining for E-protein for both ZIKV and DENV (Figure S1B). Quality control analysis of the RNA-seq data showed a good correlation between the uninfected and infected replicates (Figure S1C,D). The read mapping analysis revealed around 5-17 million reads mapped to the human genome (hg19) and 1-4 million reads mapped to the ZIKV or DENV genomes in the respective samples (Figure S1E and Supplementary Table S1). ZIKV infection resulted in upregulation of 445 mRNAs (q < 0.05) and downregulation of 335 mRNAs (q < 0.05) (Figure 1B, Supplementary Table S2), and DENV infection in upregulation of 156 mRNAs (q < 0.05) and downregulation of 37 mRNAs (q < 0.05) in hNPCs (Figure 1C, Supplementary Table S3). A comparison of downregulated mRNAs showed 26 mRNAs affected by both ZIKV and DENV, and 310 mRNAs or 11 mRNAs being exclusively downregulated by ZIKV and DENV, respectively (Figure 1D). A total of 112 mRNAs were upregulated by both ZIKV and DENV, while 333 mRNAs and 44 mRNAs were exclusively upregulated by ZIKV and DENV, respectively (Figure 1E). These data indicate overlapping and distinct effects of ZIKV and DENV infection.

Signature of ZIKV and DENV Transcriptional Repression

STRING analysis of genes whose expression was decreased in ZIKV infected cells (n = 335) revealed three major clusters (Figure 1F), with genes outside of the two main clusters making up cluster III and falling into two sub-clusters (Figure 1G) (PPI enrichment p-value < 1.0 × 10−16). Cluster I (n = 66) consisted mainly of histone genes localized to Chr. 6p22 [31] (Figure 1H) (PPI enrichment p-value < 1.0 × 10−16). The smaller Cluster II (n = 18) is composed of immune response genes (grouped as a systemic lupus signature), and Cluster III (n = 250) included many cell-cycle and mitosis-related genes (Figures 1I and S1F,G). Transcription factor binding site analysis showed a significant (p-value < 6.7 × 10−14 and q-value < 5.6 × 10−12) enrichment of motifs related to NFY, TATA, and members of the OCT and FOXO transcription factor families (Figure 1J). The effects of DENV infection were quite distinct. The 37 genes down-regulated by DENV infection included PI3K-AKT-mTOR pathway genes such as SESN3, PI3KR1, and PI3KR3 (Figure 1K). KEGG analysis showed an enrichment (p-value < 1.3 × 10−3 and q-value < 8.4 × 10−3) of proliferative pathways related to mTOR and the MYC/E2F transcription programs (Figure 1L). Hence, ZIKV- and DENV-infected cells showed downregulation of histone gene expression and proliferation signatures consistent with impaired cell growth.

ZIKV and DENV Infection Activate Distinct Transcriptional Programs

ZIKV infection resulted in the upregulation of 445 mRNAs (q < 0.05) and DENV upregulated 156 mRNAs (q < 0.05) (Figure 2A,B).
KEGG analysis showed that both viruses broadly activated expression programs related to aminoacyl tRNA synthetases (ARS genes) along with MAPK and p53 signals (Figure 2C-E). ZIKV infection further increased the expression of the constitutively active PIM1 kinase, which stimulates translation and neurotrophin signaling, both of which have been implicated in ZIKV replication [32,33] (Figures 2C,G,H and S2A, ZIKV targets in red). DENV infection induced expression of enzymes related to one-carbon metabolism (e.g., serine hydroxymethyltransferase-2 (SHMT2) and methylenetetrahydrofolate dehydrogenase (MTHFD1L)), which provide activated methyl groups in the form of S-adenosylmethionine (SAM) for nucleotide synthesis and post-translational modifications (Figure 2D,I, DENV targets in red). RNA expression of SHMT2, PIM1 and ATF3 was significantly upregulated upon ZIKV and DENV infection, as observed by qRT-PCR (Figure 2F). Furthermore, these effects on transcription correspond to a different enrichment of transcription factor binding sites in the genes deregulated in ZIKV- compared to DENV-infected cells. For example, ZIKV-induced transcriptional changes indicate a role for BACH1/2, whereas DENV infection appears to alter the CHOP/ATF3 transcription program (Figures 2I,J and S2A,B). Other transcription factor binding sites are equally represented and include AP1, ETS2, MAZ, SP1, and NFAT (Figure 2I,J). Together, the expression data reflect distinct cell stress responses triggered by each virus, and they also suggest strategies to augment RNA translation and replication through PIM1, ARS-mediated tRNA loading, and S-adenosylmethionine (SAM) production.

Measuring Translational Effects of Flavivirus Infection

We measured effects on host cell mRNA translation by ribosome profiling of ZIKV and DENV infected hNPCs (MOI = 1) at 72 h post-infection in duplicates. Briefly, we used our published method, RiboDiff, to measure the translation efficiency of transcripts that are differentially regulated upon virus infection [28]. A summary of read counts mapped to ribosomal RNAs, the virus genomes, and the human genome is provided in Supplementary Table S4. On average, 4.4 million RF reads mapped to the coding region of the human genome in uninfected cells, 4.9 million in ZIKV infected, and 4.8 million in DENV infected hNPC samples, corresponding to coverage across 19,821 protein-coding genes. Quality control analysis of replicates showed significant correlations among the replicates with a Pearson coefficient >0.97 (Figure S3A,B). We used the RiboDiff statistical framework to analyze changes in mRNA translation [28].

ZIKV Infection Alters the Translation of Polyamine Metabolism Enzymes

We examined host cell mRNAs whose translation was altered upon viral infection. Applying a statistical cut-off of FDR < 5% (values at the less stringent FDR < 10% are given in parentheses), we identified 19 (58) translationally repressed mRNAs and 6 (22) translationally augmented mRNAs in ZIKV infected hNPCs (Figure 3A, Supplementary Table S5).
Applying the same stringent criteria, we identified 7 (16) repressed mRNAs and 19 (33) upregulated mRNAs in DENV-infected hNPCs ( Figure 3B, Supplementary Table S6). While relatively few mRNAs have translational changes disproportional to changes in their transcription, we noticed that both viruses equally affect specific RNAs such as ST8SIA1 (TE up), RPS3A (TE up), and SMOX (TE down). These shared translational effects may point to important biological effects. For example, RPS3A (ribosomal protein S3a) is a component of the 40S ribosome and is critical for viral protein production [34], and SMOX (Spermine Oxidase) oxidizes natural polyamines such as spermine [35]. Notably, ZIKV also upregulated the translation of OAZ2 (Ornithine Decarboxylase Antizyme 2), a key regulator of ornithine decarboxylase that catalyzes the rate-limiting step of the polyamine biosynthesis [35]. A STRING functional protein association network analysis for the top translationally repressed genes in ZIKV infected cells (DNM2, ATXN2L, HDGFRP2, SMOX, BAG3, and GBF1) further suggests effects on membrane and transport processes related to endocytosis, COPI vesicle coating, and receptor uptake ( Figure S3C-H). Translationally upregulated genes include ST8SIA1, ATP5E, RPS3A, HIST2H2AC, SPCS3, and PTCH2 ( Figure 3A). Among these, RPS3A and SPCS3 are notable for their known roles in the translation of flavivirus proteins and the virion production [36,37]. Hence, an unbiased assessment of translational changes induced by ZIKV infection reveals translational control of polyamine metabolism that is required for the unique hypusine modification of the eIF5A translation factor and has been implicated as a target for antiviral therapies [38][39][40]. DENV Shares Key Translational Effects with ZIKV Analysis of DENV-infected cells showed repression of the translation of SOCS3, SMIM15, LSM7, TEF, SMOX, KIAA0195, and NFE2L1 (NRF1) ( Figure 3B). STRING functional protein association network analysis links these effects to JAK-STAT signaling, polyamine metabolism, and RNA and protein stability ( Figure S3I-L). On the other hand, DENV increased the translation of several ribosomal proteins, translation factors (EEF1A1, EIF3L), and other genes (ST8SIA1, SEC61G, TPT1) ( Figure 3B,D). Similar to ZIKV, DENV downregulated the translation of SMOX, the key enzyme involved in polyamine catabolism ( Figure 3B). Hence, DENV and ZIKV infection share effects on polyamine metabolism and DENV has additional pronounced effects on key translation initiation and elongation factors. RNA Regulatory Motifs Enriched in Translationally Dysregulated mRNAs To identify cis-regulatory RNA motifs in the 5 UTRs of mRNAs that are translationally affected by ZIKV and DENV, we compared the TE down and TE up groups for both ZIKV and DENV datasets. We included RNAs with annotated 5 UTRs and compared the groups to each other and to a background list of equally expressed and annotated mRNAs that showed no significant change in their translation compared to the uninfected control sample. For ZIKV, the groups were TE up (n = 83 at p < 0.05, q < 0.3), TE down (n = 228 at p < 0.05, q < 0.3), and background (n = 302); for DENV TE, up (n = 69 at p < 0.05, q < 0.3), TE down (n = 67 at p < 0.05, q < 0.3), and background (n = 208). Despite the relatively small size of groups, we identified four significant (p < 1.0 × 10 −5 ) motifs in the TE up and TE down mRNA subsets for each virus. A binding site analysis shows that these sites correspond to known RNA binding protein sites. 
For example, the enriched RNA sequence in the TE up group of ZIKV infected cells corresponded to YBX1 and YBX2 binding sites (Figures 3E,F and S3M,N). We speculate that these RNA binding proteins contribute to some of the translational changes seen in infected cells, although this is pending further biochemical confirmation. Analysis of Translation Efficiencies for the ZIKV and DENV Viral Genomes We mapped 114,775 reads to the ZIKV genome and 277,897 reads to the DENV genome, representing 11- and 12-fold coverage of the ZIKV and DENV genomes, respectively (Supplementary Table S2 and Figure S4A,B). Compared to host mRNA translation, ZIKV and DENV RNAs were the second and third most highly translated mRNAs in infected hNPCs (Figure 4A,B). The ZIKV (ZIKV-IbH) and DENV (DV-2-16681) genomes are ~10 kb in length and encode a polyprotein that is post-translationally cleaved by the host and viral proteases (NCBI Reference Sequence: NC_012532.1) [41,42]. This is expected to produce equimolar amounts of proteins; however, ribosome frameshifting can lead to preferential production of specific proteins as shown for West Nile Virus [43,44]. In both viruses, RNA expression and translating fraction (Ribo read counts) are correlated (Pearson r = 0.76; Spearman r = 0.68) (Figure 4C,D). Detailed analysis of RNA and ribosomal read coverage across the viral genomes shows variation that may reflect low read counts, technical biases, and ribosome stalling at specific sites [45] (Figure 4E,F). The DENV 3′ UTR (453 bases) is highly abundant and shows low ribosome coverage, whereas the DENV 5′ UTR (96 bases) shows high ribosome coverage (Figure 4D,F). In the DENV 5′ UTR, we detect potential non-AUG start codons in only the +1 and +2 frames, suggesting one or two upstream open reading frames (uORFs) that precede the 0 frame start codon of the capsid protein (Figure S4C). The annotated ZIKV-IbH isolates have a 5′ UTR (106 bases) (Figure S4D). A detailed analysis of AUGs with ribosomal coverage and the Ribo/RNA ratio in the capsid protein reveals three potential ORFs indicated by a ribosome peak, with ORF1 starting at position 36 (AUG codon), a second ORF2 initiating at AUG (position 51), and a third ORF3 at AUG (position 81) (Figure S4E). Similar to DENV, we detect high levels of RNA and ribosome reads in the ZIKV 3′ UTR (428 bases) RNA (Figures 4E,F and S4F), which has previously been implicated in repressing viral replication [46]. (Figure 4E,F caption: RNA and ribosome coverage across the ZIKV (E) and DENV (F) genomes mapped to the virus polyprotein; both viruses showed relatively higher RNA reads at the 3′ UTR and higher ribosomal coverage at the 5′ UTR, suggesting differential RNA abundance and translation from the UTR regions.) Discussion We provide a detailed and unbiased analysis of the transcriptional and translational dynamics of ZIKV and DENV infection in human neuronal progenitor cells. Previous studies in different cell types have reported many effects on the interferon response [7], an MX2-related transcription program in B cells [8], and changes in the expression of extracellular matrix proteins [9], cell cycle, RNA processing, and cell metabolism [10,11]. Our analysis highlights cellular stress responses to viral infection, and on the other hand, we observe the activation of mechanisms that support the viral life cycle.
Regarding stress responses, our data indicate that DENV preferentially triggers an unfolded protein response (UPR) program related to the ATF3/CHOP/DDIT3 transcription factors, whereas ZIKV favors a different, BACH1/2-NRF2 driven antioxidant program. Importantly, this ZIKV-induced redox program has previously been implicated in facilitating ZIKV replication [47][48][49][50]. We see other changes that also appear to enhance viral replication and translation. For example, increased expression of rate-limiting one-carbon metabolism enzymes such as SHMT2 and MTHFD1L provides activated methyl groups that are needed for nucleotide biosynthesis and viral replication; these mechanisms have been studied in cancer, and inhibitors are available [51][52][53][54]. We also notice an increase in tRNA loading enzymes -aminoacyl-tRNA synthetase (ARS) and expression of the constitutive active PIM1 kinase that stimulates protein synthesis in an mTOR independent manner [32]. Notably, both ARS and PIM1 have recently been implicated in flavivirus and ZIKV translation and replication [33,55]. Hence, the gene expression changes reflect both cellular responses and viral survival strategies and support potential cellular targets in metabolism and translation as novel antiviral strategies. The re-programming of protein synthesis away from host mRNAs towards viral protein synthesis is a particularly stunning aspect of viral biology [12,56,57]. Other studies have explored the complex biochemical mechanisms [12,56,57]. Our study confirms preferential translation of viral RNAs, and we further provide a catalog of translational changes that include potential opportunities for antiviral attack. For example, ZIKV and DENV infected cells show downregulation of SMOX translation, which will decrease polyamine catabolism and thus increase polyamine availability for viral replication and translation [58]. A recently reported polyamine prodrug is thought to act in the exact opposite manner and increase SMOX expression, thereby depleting the required metabolites [59,60]. This pathway has been implicated as a broad spectrum anti-viral strategy beyond ZIKV and DENV and, intriguingly, both viruses target the translation of a key polyamine catabolic enzyme [38,[59][60][61][62][63][64]. We detect other translational effects that have been implicated in viral biology. For example, the ribosomal protein RPS3A stands out among translationally activated host mRNAs, and RPS3A has been shown to interact directly with the DENV NS1 protein and augment the viral RNA translation [36]. Similarly, ZIKV infected cells increase translation of the Signal Peptidase Complex Subunit 3 (SPCS3) mRNA which has been identified as a genetic requirement for virion production for several flaviviruses [37]. Together, we provide a detailed accounting of the transcriptional and translational effects of DENV and ZIKV infection, however, further follow-up and experimental validation of these effects are much needed in hNPCs and other models of DENV and ZIKV infection. The data presented here is based on high throughput sequencing studies that are statistically robust and provide descriptive data indicating the potential gene expression programs and translational changes induced by virus infection. Further investigation based on our analysis would help to underscore the accuracy and relevance of biochemical studies and may inform the development of targeted antiviral therapies by inhibiting host factors relevant for viral replication.
2022-07-28T05:11:13.330Z
2022-06-28T00:00:00.000
{ "year": 2022, "sha1": "08ed347721ac1acb1ae680cd772e2c76f0688e52", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "08ed347721ac1acb1ae680cd772e2c76f0688e52", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
269633081
pes2o/s2orc
v3-fos-license
Molecular histopathology of matrix proteins through autofluorescence super-resolution microscopy Extracellular matrix diseases like fibrosis are elusive to diagnose early on, to avoid complete loss of organ function or even cancer progression, making early diagnosis crucial. Imaging the matrix densities of proteins like collagen in fixed tissue sections with suitable stains and labels is a standard for diagnosis and staging. However, fine changes in matrix density are difficult to realize by conventional histological staining and microscopy as the matrix fibrils are finer than the resolving capacity of these microscopes. The dyes further blur the outline of the matrix and add a background that bottlenecks high-precision early diagnosis of matrix diseases. Here we demonstrate the multiple signal classification method-MUSICAL-otherwise a computational super-resolution microscopy technique to precisely estimate matrix density in fixed tissue sections using fibril autofluorescence with image stacks acquired on a conventional epifluorescence microscope. We validated the diagnostic and staging performance of the method in extracted collagen fibrils, mouse skin during repair, and pre-cancers in human oral mucosa. The method enables early high-precision label-free diagnosis of matrix-associated fibrotic diseases without needing additional infrastructure or rigorous clinical training. . Overview of the numbers of samples, images and ROI's used in the study.*The pathologists select these 25 samples according to their exclusion criteria, where they choose samples that do not have other complications. Metrics Explanation and Clinical relevance Human Oral The overall collagen density change in sub-epithelium with respect to keratin density change in Submucous SE/E Epithelium. Provides an overview of how the density of these two molecules is effected when the pre-cancer Fibrosis progresses to cancer.The density change of keratin in upper epithelium relative to lower epithelium. UE/LE The disease originates in lower epithelium with high keratin expression relative to upper epithelium.Lower value denotes normal and the value increases with disease progression.The density of collagen in upper sub-epithelium relative to keratin in lower epithelium.USE/LE These two sub-layers are closely interacting with each-other and is at the junction of the site of invasion.This is an important metric to evaluate the relative collage keratin interaction in early onset of pre-cancer and therefore, a valuable early-marker. The density change of collagen in upper sub-epithelium relative to lower sub-epithelium.LSE/USE This represents papillary and reticular regions of sub-epithelium which increasingly gets indistinguishable with advancement of oral sub-mucous fibrosis.Human Oral PKL/UE The density of keratin in para-keratinized layer relative to upper epithelium. Leukoplakia The density of para-keratinized layer at this junction indicates progression of the disease. The keratin in the PKL layer are highly compressed and difficult to effectively stain due to antigen blocking, but are easily discernible with auto-fluorescence. UE/LE The density of keratin in upper epithelium relative to lower epithelium. Table S2.Overview of the clinical metrics used in the study.This table refers to the metrics used in Fig 4 and Fig 5 of the article.For mouse dermal fibrosis, all intensity measurements are taken relative to the stratum corneum of the skin (outer most exposed layer of skin) in the Fig 7 of the article . Figure S2 . 
Figure S2.Effect of epitope retrieval in reducing the formalin-induced fluorescence (a,b) are the autofluorescence images of the same mouse skin tissue section taken before and after heat-induced epitope retrieval (same tissue section, image captured at similar location before and after processing).The yellow arrows show the regions with non-uniform fluorescence before epitope retrieval.(c) FTIR spectra of the adjacent mouse skin tissue sections with and without epitope retrieval.The spectra shows that the peaks of methylene bridges (formalin-derived crosslinks that cause unspecific autofluorescence) are substantially reduced upon epitope retrieval. Figure S3 . Figure S3.Benchmarking MUSI-tAF super-resolution in collagen nanofibers.A region of thinly deposited collagen-I under (a) diffraction-limited (mag 20×), (b) MUSI-tAF, and (c) SEM showing that MUSI-tAF images collagen nanofibers and can distinguish collagen spaced 70-80 nm apart (edge to edge).A denser region of rat tail collagen-I fibers under (d) diffraction-limited, (e) MUSI-tAF, and (f) SEM showing matching regions.The (g) diffraction-limited and (h) MUSI-tAF images of the same region demonstrate enhancement of resolution in closely spaced collagen fibers (yellow arrows).The profiles along the white line is shown in (i) demonstrating that MUSI-tAF super-resolves a nearly flat profile in DL imaging of four closely spaced collagen bundles.The colored boxes in the images (d,e,f) highlight the regions of the fiber structure visualizing the matching between the DL, MUSI-tAF and the SEM images respectively.The boxes of the same color show the matching region. Figure S4 . Figure S4.Collagen autofluorescence provides a faithful distribution and localization than its labeled microscopy.The left column shows collagen autofluorescence (blue channel) and the right column is the collagen labeling (red channel) taken from the same region.The zoomed images show that although labeling can provide a visual correspondence of collagen density it has poor localization of fibers.The images were acquired at 20× magnification, 0.8 NA. Figure S5 . Figure S5.Comparing collagen nanostructures in MUSI-tAF autofluorescence and labelled fluorescence.MUSI-tAF images of (a-d) purified collagen-I and (e-h) mouse skin tissue sections.Images derived from (a,b,e,f) tissue autofluorescence (tAF) in blue emission, (c,d,g,h) same tissue immunolabeled with collagen-I and fluorescent probe having red emission.All insets show the corresponding source (diffraction-limited) images (20×, 0.8 NA objective).(a,c,e,g) are full-field super-resolved images and (b,d,f,h) are a small region illustrating super-resolved structures.Co-localization of structures of (i,j) pure collagen-I and (m,n)tissue from autofluorescence (blue) and labeled (red) fluorescence.Similarity maps of MUSICAL structures in the red and blue colors images in (i,j) are shown in (k,l) for Collagen-I, and similarly similarity maps of the structures in red and blue colored images in (m,n) are shown in (o,p) for the mouse skin tissue. Figure S6 . 
Figure S6.Simulated sample showing emitters placed along a straight line at a distance of 10 µm, showing the rejection of out-of-focus light property in MUSI-tAF.The imaging process was simulated using a 20× and 100× objective with Poisson noise.(a) The line passes through several planes at different z-positions and is the ground-truth.The color bar indicates the distance of the emitter and the coverslip.(b) 20× objective diffraction limited image over 100 frames created for the sample.The edges of the line (arrows) show the effect of the point-spread-function being wider at off-focus regions.(c) MUSI-tAF results for the 20× objective where the focal section is filtered from the off-focus emission.The dotted line is the profile of the corresponding diffraction limited line showing the region of rejection on the edges.(d) Profile plot of a 100× diffraction limited image. Figure S7 . Figure S7.Histology of representative full oral tissues sections with varying levels of pathology associated with oral carcinoma.(a) Hematoxylin and eosin (HE) stain of the normal oral mucosa (NOM), the darker stained outer epithelium (black arrows), and sub-epithelium (red arrow).The region below the black line is majorly muscle tissues; the bold black arrow shows the epithelium's rete pegs, i.e., undulations at the base of the epithelial layer.(b) HE of oral submucous fibrosis (OSF) shows denser epithelium and flattened rete pegs (bold arrow).(c)HE of oral submucous fibrosis with dysplasia (OSFD), ) with flattened rete pegs (bold arrow), dense sub-epithelium, and perivascular fibrosis (red arrows).(m) HE of oral squamous cell carcinoma (OSCC), with the epithelial basement membrane, lost continuity and invaded into subepithelium (bold arrows). Figure S8 . Figure S8.Multi-scale MUSI-tAF images of healthy and fibrotic skin tissues.(a-d) MUSI-tAF images of a healthy skin tissue visualized at image sizes of (a) 10000×13400 (full field of view), (b) 4000×5000, (c) 2000×3000, and (d) 1000×1000.The four image dimensions have been illustrated for progressive pathological fibrosis at (e-h) 18 days, (i-l) 30 days, (m-p) 60 days, and (q-t) 180 days; here, while 18 days and 30 days treatment are early fibrosis, 60 days and 180 days are advanced fibrosis.(u-t) images of scar tissue collected from 60 days of wound healing at the same dimensions as healthy skin.scale bar= 2µm. Figure S9 . Figure S9.Comparison of epifluorescence and MUSI-tAF intensity ratios in different regions of the oral tissue.The figure highlights how MUSI-tAF can make better delination of disease stages compared to epifluorescence imaging of autofluorescent tissue matrix.This figure corresponds to Fig 4 (a-e) in the main manuscript. Table
2024-05-10T06:17:47.811Z
0001-01-01T00:00:00.000
{ "year": 2024, "sha1": "7ff4452aff94cd028b47f22401211ce21b0c4c6f", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-024-61178-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4ed1dd5dd0caf68e554cd6de40c8f3d5de08c388", "s2fieldsofstudy": [ "Medicine", "Biology", "Materials Science" ], "extfieldsofstudy": [] }
270232203
pes2o/s2orc
v3-fos-license
Predicting Kawasaki disease shock syndrome in children Background Kawasaki disease shock syndrome (KDSS) is a critical manifestation of Kawasaki disease (KD). In recent years, a logistic regression prediction model has been widely used to predict the occurrence probability of various diseases. This study aimed to investigate the clinical characteristics of children with KD and develop and validate an individualized logistic regression model for predicting KDSS among children with KD. Methods The clinical data of children diagnosed with KDSS and hospitalized between January 2021 and December 2023 were retrospectively analyzed. The best predictors were selected by logistic regression and lasso regression analyses. A logistic regression model was built of the training set (n = 162) to predict the occurrence of KDSS. The model prediction was further performed by logistic regression. A receiver operating characteristic curve was used to evaluate the performance of the logistic regression model. We built a nomogram model by visualizing the calibration curve using a 1000 bootstrap resampling program. The model was validated using an independent validation set (n = 68). Results In the univariate analysis, among the 24 variables that differed significantly between the KDSS and KD groups, further logistic and Lasso regression analyses found that five variables were independently related to KDSS: rash, brain natriuretic peptide, serum Na, serum P, and aspartate aminotransferase. A logistic regression model was established of the training set (area under the receiver operating characteristic curve, 0.979; sensitivity=96.2%; specificity=97.2%). The calibration curve showed good consistency between the predicted values of the logistic regression model and the actual observed values in the training and validation sets. Conclusion Here we established a feasible and highly accurate logistic regression model to predict the occurrence of KDSS, which will enable its early identification. Introduction Kawasaki disease (KD), also known as cutaneous mucosal lymph node syndrome, is an acute autoimmune vasculitis.KD, which usually occurs in children < 5 years of age, is an acute selflimited febrile disease characterized by a combination of several characteristic clinical symptoms, including polymorphic rash, nonsuppurative conjunctivitis, red mouth and lips, myrtle tongue, erythema and edema of the hands and feet, and cervical lymphedema (1).In addition to the characteristics of KD, patients with Kawasaki disease shock syndrome (KDSS) show poor perfusion or systolic hypotension.The systolic blood pressure of affected children is persistently lower (by ≥20%) than the normal low value of systolic blood pressure of children of the same age and requires volume expansion or vasoactive drugs to maintain blood pressure within the normal range. The cause of KDSS hypotension is currently not fully understood.Many studies have shown that systemic vasculitis changes in acute KD can lead to persistent capillary leakage, abnormal cardiac systolic function, and abnormal regulation of inflammatory cytokines, among other issues.Many factors may contribute to KDSS hypotension (2).Maddox et al. 
calculated the data of KD patients from four large medical databases in the United States, 2006-2018, and found that the incidence of KDSS was 2.8-5.3%, and showing an upward trend in recent years (3).The early symptoms of KD may be atypical in some children with KDSS (4).Children with KDSS usually experience rapid progression and shock, often have stronger inflammatory reactions, and can be prone to coronary artery disease and multi-organ dysfunction, so its early recognition is particularly important.The pathogenesis of KDSS is complex and may be related to capillary leakage and decreased peripheral vascular resistance (5,6), which cannot be predicted by a single index. Therefore, this study aimed to build a KDSS risk prediction model for children with KD, identify the best risk prediction tool, improve the prognosis of KDSS, and reduce the sociomedical burden by accurately identifying high-risk KDSS groups and implementing more preventive interventions. Patient population This retrospective study included a total of 74 children diagnosed with KDSS at Beijing Children's Hospital, Capital Medical University, in 2021-2023.Two to three children with KD with stable hemodynamics at the same time (2 weeks before and after diagnosis) were randomly selected.A total of 156 children diagnosed with KD during the same period were also included.All children met the KD criteria proposed by the 2021 American College of Rheumatology/Vasculitis Foundation Guideline (1), the Japan Guideline written by Kobayashi et al. (7), and the KDSS criteria proposed by Kanegaye et al. (8) The exclusion criteria were as follows: (1) incomplete clinical data; and (2) the presence of serious underlying diseases (septic shock, anaphylactic shock, hypotension, coronary artery malformation, congenital heart disease.). This study was approved by the Ethics Committee of Beijing Children's Hospital affiliated with Capital Medical University (approval no.2024-E-036-R). 
Clinical data collection The Jiahe platform system of Beijing Children's Hospital was used to capture clinical data and establish a KD database. No data screening or deletion was performed, to ensure data integrity and objectivity. We collected data on sex, age, body mass index (BMI), fever duration, symptoms, and results of laboratory tests conducted prior to the onset of KDSS and treatment with intravenous immunoglobulin. All variables were collected within the first 12 h of admission. Possible symptoms included maculopapular, diffuse erythroderma, or erythema multiforme-like rash; bilateral bulbar conjunctival injection without exudate; erythema and cracking of lips, strawberry tongue, and/or erythema of oral and pharyngeal mucosa; suppurative cervical lymph node enlargement; erythema and edema of the hands or feet (acute phase); and/or periungual desquamation (subacute phase). Laboratory tests included erythrocyte sedimentation rate (ESR), white blood cell count (WBC), absolute neutrophil count (ANC), platelet (Plt) count, hemoglobin (Hgb), C-reactive protein (CRP), high-sensitivity cardiac troponin I (hs_cTnI), brain natriuretic peptide (BNP), aspartate aminotransferase (AST), alanine aminotransferase (ALT), albumin (Alb), fibrinogen, total protein (TP), serum Na, serum K, serum Ca, serum P, Cr, total bile acid (TBA), glycocholic acid (CG), prothrombin time, international normalized ratio, partial thromboplastin time, thrombin time, D-dimer, and antithrombin III activity. Echocardiographic results included left ventricular enlargement (LVE), decreased ejection fraction (DEF), presence or absence of pericardial effusion (PF), and coronary artery dilatation (CAD). Statistical analysis A descriptive analysis was performed using SPSS version 26.0, while other statistical analyses were performed using R software (version 4.1.3). The data were randomly divided into a training set and a validation set at a 7:3 ratio. The training set was used for feature selection and model construction, while the validation set was used to evaluate the effectiveness of the training model. Univariate analysis Normally distributed quantitative data are expressed as mean ± standard deviation, and intergroup comparisons were performed using the t-test. Non-normally distributed data are expressed as median and interquartile range, and groups were compared using the Mann-Whitney U test. Categorical data are expressed as frequency and percentage (%) and were compared using the χ² test. Variable selection and prediction model establishment A logistic regression model and a lasso regression model were used to screen predictors of KDSS. The logistic regression model used a stepwise method to screen variables (P < 0.05). Use of lasso regression can reduce collinearity between variables and ensure that the subsequently generated model is not overfitted. Considering the characteristic variables proposed by the above two methods, the five best variables were selected to establish a prediction model. A collinearity analysis was used to further determine whether there was an association between the included variables. A variance inflation factor (VIF) was used to quantify collinearity severity. Thereafter, variables without collinearity were included in the binary logistic regression analysis.
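To make the screening workflow above concrete, a minimal sketch of penalized (lasso-type) variable selection, a VIF collinearity check, and a final binary logistic fit is given below. The study itself used R and SPSS; this Python version runs on simulated stand-in data (not the clinical dataset) and is a schematic of the procedure, not the authors' code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.preprocessing import StandardScaler

# Simulated stand-in for the training table: 162 children, 24 candidate predictors.
X, y = make_classification(n_samples=162, n_features=24, n_informative=5, random_state=0)
X = StandardScaler().fit_transform(X)

# L1-penalized (lasso-type) logistic regression shrinks uninformative coefficients to zero.
lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(lasso_logit.coef_[0])
print("retained predictor columns:", selected)

def vif(features, j):
    """Variance inflation factor of column j: 1 / (1 - R^2 of column j regressed on the rest)."""
    others = np.delete(features, j, axis=1)
    r2 = LinearRegression().fit(others, features[:, j]).score(others, features[:, j])
    return 1.0 / (1.0 - r2)

print("VIFs:", [round(vif(X[:, selected], k), 2) for k in range(len(selected))])

# Final binary logistic model restricted to the retained predictors (large C ~ essentially unpenalized).
final_model = LogisticRegression(C=1e6).fit(X[:, selected], y)
```

In the actual analysis, stepwise logistic selection and lasso selection were run separately and their results were combined to choose the final five predictors; the sketch collapses this into a single penalized fit for brevity.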
Model performance evaluation The accuracy of the model in the training set was assessed by receiver operating characteristic (ROC) curve.The discrimination of the model was determined by calculating the area under the ROC curve (AUC).The calibration curve was used to assess the goodness of fit between the predicted model and the observed data.The model's calibration was assessed by comparison of the predicted and observed values and visualized using a calibration curve graph with 1000 bootstrap resampling procedures.The model was visualized by a nomogram. Model validation The data from the test set were substituted into the model to predict the occurrence of KDSS.Calibration curves were plotted to verify the model's accuracy and consistency, and its predictive value was assessed. Results A total of 74 children with confirmed KDSS admitted to the hospital in January 2021 to December 2023 were included in this study; another 156 children with KD admitted to the hospital during the same period were included.The training set included 52 children with KDSS and 110 children with KD, while the validation set included the remaining 22 children with KDSS and 46 children with KD.Variables whose data were incomplete were interpolated through the mice package. Multivariate analysis Based on the univariate analysis results, 24 significant factors (P < 0.05) were identified on the multivariate analysis: BMI, rash, Hgb, Plt, ANC, CRP, BNP, hs_cTnI, K, Na, Ca, P, TP, Alb, Cr, AST, ALT, TBA, CG, FIB, LVE, EF, PF, and CAD.Two regression methods were used to screen the variables.First, nine variables were screened in the logistic regression analysis, including rash, Plt, Na, Ca, P, Cr, AST, EF, and BNP.To prevent overfitting, we applied Lasso regression for variable selection once again.The following 10 indicators were screened by the lasso regression analysis: BMI, rash, BNP, K, Na, P, Alb, AST, ALT, and TBA (Figures 1, 2).We screened five optimal variables from the two regression models, including rash, BNP, Na, P, and AST. Collinearity analysis Five variables, including rash, BNP, Na, P, and AST, were screened from the two regression models by the collinearity analysis.We use tolerances and VIF to quantify the collinearity severity.The tolerance of each variable was >0.2 and VIF was <5, indicating no significant collinearity between the two variables. Logistic regression models evaluated in the training set A ROC curve analysis used to evaluate the discriminant performance of the logistic regression model revealed an AUC=0.979,sensitivity=96.2%,and specificity=97.2%(Figure 3).Consistency was checked using the calibration curve method.The calibration curve of the logistic regression model drawn in the training set showed that the calibration curve fit the standard curve well and the model calibration effect was good (Figure 4). 
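Continuing the sketch above (same hypothetical objects final_model, X, selected, and y), discrimination and stability of such a model are typically checked with an ROC curve and a bootstrap, roughly as follows; the published calibration curves were produced in R with 1000 bootstrap resamples, so this is only an illustrative stand-in.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.utils import resample

prob = final_model.predict_proba(X[:, selected])[:, 1]
auc = roc_auc_score(y, prob)
fpr, tpr, thresholds = roc_curve(y, prob)
best = thresholds[np.argmax(tpr - fpr)]  # Youden index: maximizes sensitivity + specificity - 1
print(f"AUC = {auc:.3f}, optimal threshold = {best:.3f}")

# 1000-resample bootstrap of the AUC as a simple stability check (apparent, not bias-corrected).
boot_auc = [roc_auc_score(yb, pb)
            for yb, pb in (resample(y, prob, random_state=i) for i in range(1000))]
print("bootstrap 95% interval for AUC:", np.round(np.percentile(boot_auc, [2.5, 97.5]), 3))
```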
Establishment of nomogram model According to the binary logistic regression analysis results, R software was used to construct a nomogram model (Figure 5) to visualize the model. To predict the incidence of KDSS, a line perpendicular to the axis of the corresponding index points was drawn on the column chart according to the values of rash, BNP, Na, P, and AST of each child with KD at admission, and the index points were summed. Next, the sum over the total number of points was determined, and a line was plotted perpendicular to the risk axis to indicate the probability of KDSS occurring in children with KD. (Captions to Figures 1 and 2: Ten risk factors selected by the lasso regression analysis; the two dotted vertical lines were drawn at the optimal scores by the minimal criteria and the 1-S.E. criteria (body mass index, rash, brain natriuretic protein, potassium, sodium, phosphorus, albumin, aspartate aminotransferase, alanine aminotransferase, and total bile acid); S.E., standard error. Lasso coefficient profiles of the 24 identified risk factors.) Validation of logistic regression models in test sets The calibration diagram of the test set showed that the predicted values of the logistic regression model were in good agreement with the actual values (Figure 6). The correct prediction rate of the model is 1 − (4/68) = 94.12% (Table 3). Discussion KDSS is a severe subtype of KD that features acute onset and severe illness. Because KD shares features with other febrile illnesses in childhood, many common symptoms have been identified (9), which prevents its easy recognition. The early manifestation of KDSS is often incomplete, and it is easily misidentified. Therefore, the best time for diagnosis and treatment is commonly missed, which can lead to death in severe cases. The incidence of KDSS is low, and the study with the largest sample size in China was from Taiwan, where the incidence of KDSS in KD was 1.45% (10), much lower than the 2.8-5.3% reported in the United States (3), adding to the difficulty of understanding the disease. KD can involve the coronary system, and studies have shown that the risk of coronary artery injury is higher in KDSS than in KD (11). The lack of early recognition of KDSS and delayed treatment may lead to permanent coronary structural damage (12), thereby increasing the risk of long-term complications. Therefore, the early diagnosis and treatment of KDSS is particularly important. Rather than screening for the most relevant risk factors, this study aimed to screen for the most parsimonious risk factors from among the many independent risk factors of KDSS and construct the best logistic regression model to predict early KDSS.
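The nomogram is a graphical re-expression of the fitted logistic model: the points assigned to rash, BNP, Na, P, and AST are proportional to their weighted contributions to the linear predictor, and the total score maps to a risk through the logistic function. The sketch below uses entirely hypothetical coefficients (stand-ins, not the values in Table 2) purely to show the score-to-probability mapping.

```python
import math

# Entirely hypothetical intercept and weights, standing in for the fitted coefficients of Table 2.
coef = {"intercept": 25.0, "rash": 1.5, "BNP": 0.002, "Na": -0.2, "P": -1.0, "AST": 0.01}

def kdss_probability(rash, bnp, na, p, ast):
    """Binary logistic model: risk = 1 / (1 + exp(-linear predictor))."""
    z = (coef["intercept"] + coef["rash"] * rash + coef["BNP"] * bnp
         + coef["Na"] * na + coef["P"] * p + coef["AST"] * ast)
    return 1.0 / (1.0 + math.exp(-z))

# Example input: rash present (1), BNP 800, Na 133, P 0.9, AST 120 (illustrative values only).
print(f"predicted risk of KDSS: {kdss_probability(1, 800, 133, 0.9, 120):.2f}")
```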
The incidence of KDSS is low; in fact, only 74 children with KDSS were enrolled in the largest pediatric hospital in China over the past 3 years. Considering the limited sample size and the fact that as many as 24 clinical/laboratory variables were statistically significant, traditional methods may be unsatisfactory. We used a logistic regression stepwise method and a lasso regression analysis to screen variables together. Next, we obtained the best variables to introduce into the model. Finally, we screened out five variables, including rash, BNP, Na, P, and AST. We found that rash, BNP, hyponatremia, hypophosphatemia, and AST were independent risk factors for KDSS. These findings both support the existing hypothesis on the pathogenesis of KDSS and provide new insight into its diagnosis and management. (Figure 3 caption: The ROC curve of the combined predictive model in the training set.) Rash was identified as an independent risk factor for KDSS, a finding that is consistent with the previous literature (2). A persistent, extensive rash may reflect more severe systemic involvement, aggravating microvascular permeability and capillary leakage. BNP concentration is a quantitative plasma biomarker for the presence and severity of hemodynamic cardiac stress and heart failure (HF). End-diastolic wall stress, intracardiac filling pressure, and cardiac volume appear to be its main triggers. BNP has high prognostic accuracy for a patient's risk of death and HF hospitalization (13). KD is a systemic vasculitis disorder. In animal models of KD, immunoglobulin A and C3 immune complexes are deposited within the cardiovascular tissue (14). KDSS is a more severe vasculitis than KD. Accordingly, myocardial injury in KDSS is more severe than that in KD. As described in this article, the incidence of LVE in children with KDSS was 44.2%, while that in children with KD was 2.7%, a statistically significant difference (P < 0.01). The incidence of decreased ejection fraction in children with KDSS was also significantly higher than that in children with KD, confirming that the former have more severe cardiac dysfunction. BNP is synthesized and released by ventricular myocardial cells in response to stress on myocardial walls caused by volume or pressure overload and ischemia. In addition to hemodynamic stress, inflammation in myocardial tissue can induce BNP production (15). Therefore, BNP may be a sensitive indicator of KDSS.
Hyponatremia and hypophosphatemia are two electrolyte disorders associated with KDSS.In patients with KDSS, persistent hypotension and inadequate effective circulatory volume may lead to an increased renal excretion of Na and P, leading to hyponatremia and hypophosphatemia.Hypophosphatemia can be considered the superposition of myocardial depression, peripheral vascular dilation insufficiency, and acidosis in shock.Hypophosphatemia occurs in many critically ill patients and usually indicates severe disease (16, 17).One study showed (18) that, among patients with sepsis but without CKD, the risk of death and shock in patients with hypophosphatemia was significantly higher than that in patients without hypophosphatemia.The conditions of patients with KD and hyponatremia are generally more serious than those of patients without hyponatremia.In particular, in terms of the incidence of coronary artery lesions, the odds ratio of patients with KD with versus without hyponatremia was as high as 4.78 (19).These electrolyte disorders may further affect cardiovascular function, aggravate shock symptoms, and even be life-threatening in severe cases.These findings are consistent with those of this study.Sodium is a key electrolyte in maintaining intra-and extracellular water balance and neuromuscular function.Hyponatremia may lead to neurological symptoms such as headache, nausea, convulsions, and even coma. Phosphorus is an essential electrolyte for maintaining intra-and extracellular energy metabolism and bone health.Hypophosphatemia may cause symptoms such as muscle weakness and arrhythmia.AST, an enzyme found in the liver and other tissues, is not a specific indicator of KDSS.AST is mainly found in the mitochondria of cells in the liver, kidneys, brain, lungs, and skeletal muscle (20).KDSS, a serious complication of KD characterized by persistent hypotension and inadequate effective circulation, leads to multiorgan dysfunction of the digestive, respiratory, and nervous systems (21) and further promotes an elevated AST.Some reports suggest that elevated AST levels may be associated with hepatocyte damage caused by inflammation (11).In recent years, the ratio of AST and ALT levels, which usually indicate chronic liver disease severity, can predict prognosis (22, 23).Adult patients with higher baseline AST/ALT levels are more likely to develop fatal cardiovascular disease (24).Other studies demonstrated that AST/ALT is a risk factor for coronary artery injury at admission (25).This supports the possible association between AST and a severe inflammatory response. These findings encourage the further exploration of potential new mechanisms of KDSS pathogenesis.It is worth noting that our model is the most rigorously validated for KDSS predictions.The model performed well in the training set (AUC=0.979) and achieved good differential diagnostic performance in the validation set.The calibration curve analysis further confirmed its reliability for practical application. Our study is the first to integrate multiple clinical indicators into a prediction model, thus providing a powerful tool for the early identification of KDSS.Compared with a single biomarker, the model comprehensively considers multiple pathophysiological processes, giving it higher discrimination ability.The risk of developing KDSS can be determined according to the results of routine examinations on admission, which is conducive to the timely implementation of individualized treatment and thereby reduces the risk of complications. 
Certainly, our study has some shortcomings: 1) as a singlecenter study, the universality of the model requires verification in largescale multi-center studies; 2) only clinical routine indicators are used at present, and model performance may be further improved in the future by the integration of multimodal data such as imaging findings; and 3) we focused on model construction, but the explanation of the underlying etiology and pathogenesis of KDSS remains insufficient, and further mechanism research of a larger sample is needed. In summary, our multi-biomarker prediction model based on large-scale data is a valuable tool for the early identification of KDSS that lays a foundation for elucidating the heterogeneity of the disease and provides a new idea for clinical transformation.Compared with existing studies, our work made breakthroughs in sample size, validation rigor, application value evaluation, and other aspects.We have reason to believe that, through its continuous efforts to improve the model and explore the pathogenesis of KDSS in depth, it will make a major contribution to the accurate diagnosis and treatment of KDSS in children. FIGURE 4 FIGURE 4 Calibration of nomogram for predicting Kawasaki disease shock syndrome in the training set. FIGURE 5 FIGURE 5Nomogram for predicting Kawasaki disease shock syndrome among the training set. FIGURE 6 FIGURE 6Calibration of nomogram for predicting Kawasaki disease shock syndrome among the testing set. TABLE 1 Characteristics of KDSS versus KD in the training set. TABLE 2 Coefficients of binary logistic regression for predicting KDSS among the training set. TABLE 3 Predictive values of nomogram model for internal validation set.
2024-06-05T15:21:16.455Z
2024-06-03T00:00:00.000
{ "year": 2024, "sha1": "73d35793ab0f9667dee2af955842e87976407ee8", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/journals/immunology/articles/10.3389/fimmu.2024.1400046/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "242f41ed4f1175070f92c28c7391e50f07c826eb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
15641763
pes2o/s2orc
v3-fos-license
Density Matrix Expansion for Low-Momentum Interactions A first step toward a universal nuclear energy density functional based on low-momentum interactions is taken using the density matrix expansion (DME) of Negele and Vautherin. The DME is adapted for non-local momentum-space potentials and generalized to include local three-body interactions. Different prescriptions for the three-body DME are compared. Exploratory results are given at the Hartree-Fock level, along with a roadmap for systematic improvements within an effective action framework for Kohn-Sham density functional theory. Introduction Calculating the properties of atomic nuclei from microscopic internucleon interactions is one of the most challenging and enduring problems of nuclear physics. However, recent developments in few-and many-body physics together with advances in computational technology give hope that controlled calculations of medium and heavy nuclei starting from a microscopic nuclear Hamiltonian will be forthcoming (see, for example, [1,2,3]). Density functional theory (DFT), which is a self-consistent framework that goes beyond conventional mean-field approaches, offers particular promise for medium to heavy nuclei. The central object in DFT is an energy functional of the nuclear densities that would apply to all the nuclides. Phenomenological functionals have had many successes but lack a microscopic foundation and theoretical control of errors, such that extrapolations to the limits of nuclear binding are uncontrolled. Recent progress in evolving chiral effective field theory (EFT) interactions to lower momentum using renormalization group (RG) methods [4,5,6,7,8,9,10,11,12] (see also [13,14]) makes feasible a microscopic calculation of a universal nuclear energy density functional (UNEDF) [15]. The evolution weakens or largely eliminates sources of non-perturbative behavior in the two-nucleon sector such as strong short-range repulsion and the tensor force from iterated pion exchange [9], and the consistent three-nucleon interaction is perturbative at lower cutoffs [7]. When applied to nuclear matter, many-body perturbation theory for the energy appears convergent (at least in the particle-particle channel), with calculations that include most of the second-order contributions exhibiting saturation in nuclear matter and showing relatively weak dependence on the cutoff [8]. These features are favorable ingredients for a microscopic Kohn-Sham DFT treatment [16,17,18]. Indeed, Hartree-Fock is a reasonable (if not fully quantitative) starting point, which suggests that the theoretical developments and phenomenological successes of DFT for Coulomb interactions may be applicable to the nuclear case for low-momentum interactions. A formal constructive framework for Kohn-Sham DFT based on effective actions of composite operators can be carried out using the inversion method [19,20,21,22,23,24,25,26]. This is an organization of the many-body problem that is based on calculating the response of a finite system to external, static sources rather than seeking the many-body wave function. It requires a tractable expansion (such as an EFT momentum expansion or many-body perturbation theory) that is controllable in the presence of inhomogeneous sources, which act as single-particle potentials. 
This is problematic for conventional internucleon interactions, for which the single-particle potential needs to be tuned to enhance the convergence of the hole-line expansion [27,28], but is ideally suited for low-momentum interactions. Given an expansion, one can construct a freeenergy functional in the presence of the sources and then Legendre transform order-by-order to the desired functional of the densities. However, these are complicated, non-local functionals and we require functional derivatives with respect to the densities, whose dependences are usually only implicit. While this is a feasible program, it will require significant development to extend existing phenomenological nuclear DFT computer codes. We seek a path that will be compatible in the short term with current nuclear DFT technology but testable and systematically improvable. In this regard, the phenomenological nuclear energy density functionals of the Skyrme form have the closest connection to low-momentum interactions. Modern Skyrme functionals have been applied over a very wide range of nuclei, with quantitative success in reproducing properties of nuclear ground states and low-lying excitations [29,30,31]. Nevertheless, a significant reduction of the global and local errors is a major goal [32]. One strategy is to improve the functional itself; the form of the basic Skyrme functional in use is very restricted, consisting of a sum of local powers of various nuclear densities [e.g., see Eq. (1)]. Fits to measured nuclear data have given to date only limited constraints on possible density and isospin dependences and on the form of the spin-orbit interaction. Even qualitative insight into these properties from realistic microscopic calculations should be beneficial in improving the effectiveness of the energy density functional. A theoretical connection of the Skyrme functional to free-space NN interactions was made long ago by Negele and Vautherin using the density matrix expansion (DME) [33,34,35], but there have been few subsequent microscopic developments. The DME originated as an expansion of the Hartree-Fock energy constructed using the nucleon-nucleon (NN) G matrix [33,34], which was treated in a local (i.e., diagonal in coordinate representation) approximation. In this paper, we revisit the DME using non-local low-momentum interactions in momentum representation, for which G matrix summations are not needed because of the softening of the interaction. When applied to a Hartree-Fock energy functional, the DME yields an energy functional in the form of a generalized Skyrme functional that is compatible with existing codes, by replacing Skyrme coefficients with density-dependent functions. As in the original application, a key feature of the DME is that it is not a pure short-distance expansion but includes resummations that treat long-range pion interactions correctly in a uniform system. However, we caution that the Negele-Vautherin DME involves prescriptions for the resummations without a corresponding power counting to justify them. The idea of using soft, non-local potentials in an expansion starting with Hartree-Fock was explored in the late sixties and early seventies (see, for example, Refs. [36,37,38]). However, soft potentials were generally abandoned because of their inability to saturate nuclear matter at the empirical density and energy per particle. 
1 They have been revived in the context of lowmomentum potentials (often referred to as "V low k ") derived by transforming modern realistic NN potentials. The key to their success is the recognition that three-body forces (and possibly four-body forces) cannot be neglected. With lowered cutoffs, the density dependence of the three-body contribution drives saturation [8], which accounts for the apparent past failure in nuclear matter when only two-body contributions were included. The present work is a proof-of-principle demonstration with a roadmap for future developments. We note the following omissions and simplifications. • We restrict ourselves to isoscalar (N = Z) functionals. This is merely for simplicity; generalizations to the full isovector dependence will be presented in the near future. We also defer inclusion of spin-orbit and tensor terms, which will require extensions of the DME treatment of Negele and Vautherin [39]. • We work to leading order in the perturbative many-body expansion (i.e., Hartree-Fock). An upgrade path to include second order and beyond is described in Section 6. • The form for the three-body force is limited to that of chiral N 2 LO EFT. This is consistent with current approximations used with low-momentum potentials, but will need to be generalized to accommodate evolved threebody potentials. • Pairing is essential for the quantitative treatment of nuclei, particularly unstable nuclei. The DME functionals described here can be adapted to include pairing as done in conventional Hartree-Fock-Bogliubov phenomenology. However, a unified treatment is feasible with low-momentum interactions [40,41]. • There are unresolved conceptual issues for applying DFT to a self-bound system [42,43,44] that we will not address here (but which must be dealt with eventually). In addition, projection is not considered. Recently, Kaiser and collaborators have applied the DME in momentum space to a perturbative chiral EFT expansion at finite density to derive a Skyrmelike energy functional for nuclei [45,46,47]. Their analytic expressions for longrange pion contributions can be effectively applied in our formalism to avoid slowly converging partial-wave summations. However, we defer to future work a detailed description of this application and also comparisons with their results. The plan of the paper is as follows. In Section 2, we present the features of density functional theory needed in our treatment and discuss how applying the DME will lead us to a generalized Skyrme-like energy functional. In Section 3, we review the Negele/Vautherin derivation of the DME for non-local (in coordinate space) two-body potentials and make a direct extension to momentum space. The result is a set of simple formulas for the basic coefficient functions in terms of integrals over partial-wave matrix elements of the V low k potential. In Section 4, we extend the DME to include three-body forces, restricting ourselves to local potentials of the form used in chiral EFT at N 2 LO (which is the form used in current approximations to low-momentum NNN interactions). We consider two prescriptions for the three-body part. We present some tests of the DME and sample results in Section 5, highlighting the effects of non-locality, the relative size of NN and NNN contributions, and the impact of different prescriptions for the NNN DME expansion. We conclude with a summary and roadmap for future calculations in Section 6. 
Density Functional Theory In this section, we give overviews of the standard Skyrme functional and the ideas behind Kohn-Sham DFT for nuclei that we need to set up the energy density functional calculations using the DME. Skyrme Hartree-Fock Energy Density Functional In the conventional Skyrme Hartree-Fock (SHF) formalism, the energy is a functional of the density ρ, the kinetic density τ, and the spin-orbit density J. For simplicity, we restrict the discussion to N = Z nuclei here, so these are isoscalar densities only. This functional is a single integral of a local energy density, which depends in a simple way on these densities [Eq. (1)] [48]. Expressions for the Skyrme functional including isovector and more general densities can be found in Ref. [49]. The densities ρ, τ, and J are expressed as sums over single-particle orbitals φ β (x) [Eqs. (2)-(4)], where the sums are over occupied states and the spin-isospin indices are implicit. (More generally, when pairing is included with a zero-range interaction, the sums are over all orbitals up to a cutoff, weighted by pairing occupation numbers. This complicates finding the self-consistent solution significantly but is not important for our discussion.) The parameters t 0 -t 3 , W 0 , and α determine the functional and are obtained from numerical fits to experimental data. Varying the energy with respect to the wavefunctions with Lagrange multipliers ε β to ensure normalization 2 leads to a Schrödinger-type equation with a position-dependent mass term [Eq. (5)] [50,48], where U and the W 0 spin-orbit potential term are given in Eqs. (6)-(7) [48] (see Ref. [51] for details). [Footnote 2: Unconstrained variation of the orbitals is the usual textbook formulation of Skyrme Hartree-Fock [48]. But this does not hold beyond Hartree level for a general microscopic DFT treatment with finite-range potentials, for which there is an additional constraint to the orbital variation [18].] The potentials in Eqs. (6)-(7) and the orbitals from Eq. (5) are evaluated alternately until self-consistency (see Fig. 10). As we will see below, the DME energy functional for N = Z will take the same local form as E SHF [Eq. (8)], where the energy density function E DME is evaluated with the local densities at R. We follow the Negele/Vautherin notation for E DME and write it in the form of Eq. (9) [33], where A, B, C are functions of the isoscalar density ρ instead of the constant Skyrme parameters, and we have suppressed terms that go beyond the present limited discussion. (When N ≠ Z, these are functions of the isovector densities as well.) Equation (9) implies that the DME form will be a direct generalization of the Skyrme functionals. DFT from Effective Actions Microscopic DFT follows from calculating the response of a many-body system to external sources, as in Green's function methods, only with local, static sources that couple to densities rather than fundamental fields. (Time-dependent sources can be used for certain excited states.) It is profitable to think in terms of a thermodynamic formulation of DFT, which uses the effective action formalism [52] applied to composite operators to construct energy density functionals [19,20,22]. The basic plan is to consider the zero temperature limit of the partition function Z for the (finite) system of interest in the presence of external sources coupled to various quantities of interest (such as the fermion density). We derive energy functionals of these quantities by Legendre transformations with respect to the sources [53]. These sources probe, in a variational sense, configurations near the ground state.
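For reference, the Skyrme Hartree-Fock quantities discussed at the start of this section can be summarized in one standard convention as follows; this is only a sketch (the coefficients and omitted current/spin-orbit terms follow common Vautherin-Brink-type conventions and may differ in detail from the paper's Eqs. (1)-(4) and (9)).

```latex
% Local densities built from the occupied single-particle orbitals (spin-isospin sums implicit):
\rho(\mathbf{R})   = \sum_{\beta \in \mathrm{occ}} \bigl|\phi_\beta(\mathbf{R})\bigr|^2 , \qquad
\tau(\mathbf{R})   = \sum_{\beta \in \mathrm{occ}} \bigl|\nabla\phi_\beta(\mathbf{R})\bigr|^2 , \qquad
\mathbf{J}(\mathbf{R}) = -i \sum_{\beta \in \mathrm{occ}}
      \phi_\beta^\dagger(\mathbf{R})\,\bigl(\nabla \times \boldsymbol{\sigma}\bigr)\,\phi_\beta(\mathbf{R}) .
% A representative quasi-local N = Z Skyrme energy with parameters t_0..t_3, W_0, alpha:
E_{\mathrm{SHF}} \simeq \int\! d\mathbf{R}\,\Bigl[\, \tfrac{\hbar^2}{2M}\,\tau
   + \tfrac{3}{8}\,t_0\,\rho^2 + \tfrac{1}{16}\,t_3\,\rho^{2+\alpha}
   + \tfrac{1}{16}\,(3t_1+5t_2)\,\rho\tau
   + \tfrac{1}{64}\,(9t_1-5t_2)\,(\nabla\rho)^2
   - \tfrac{3}{4}\,W_0\,\rho\,\nabla\!\cdot\!\mathbf{J} + \ldots \Bigr] .
% The DME leads to the same structure with density-dependent couplings (Negele-Vautherin notation):
E_{\mathrm{DME}}(\mathbf{R}) = \tfrac{\hbar^2}{2M}\,\tau
   + A[\rho]\,\rho^2 + B[\rho]\,\rho\tau + C[\rho]\,(\nabla\rho)^2 + \ldots
```

Whatever the precise coefficients, the feature exploited below is that both forms are single spatial integrals of a local energy density built from ρ, τ, and J evaluated at the same point R.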
An analogous system would be a lattice of interacting spins, to which we apply an external source in the form of a magnetic field H [52]. Because H = ∂G[M]/∂M and H vanishes in the ground state, G is extremized in the ground state (and concavity tells us that it is a minimum). If H is an inhomogeneous source, the formalism is generalized by replacing partial derivatives by functional derivatives and performing a functional Legendre transform. To derive density functional theory, we follow the same procedure, but with sources that adjust density distributions rather than spins. (We can either introduce a chemical potential or only consider variations that preserve net particle number. We implicitly assume the latter here.) Consider first the simplest case of a single external source J(x) coupled to the density operator ρ(x) ≡ ψ † (x)ψ(x) in the partition function for which we can construct a path integral representation with Lagrangian L [52]. (Note: because our treatment is schematic, for convenience we neglect normalization factors and take the inverse temperature β and the volume Ω equal to unity in the sequel.) The static density ρ(x) in the presence of J(x) is which we invert to find J[ρ] and then Legendre transform from J to ρ: with For static ρ(x), Γ[ρ] is proportional to the conventional Hohenberg-Kohn energy functional, which by Eq. (14) is extremized at the ground state density ρ gs (x) (and thermodynamic arguments establish that it is a minimum [21]). 3 We still need a way to carry out the inversion from ρ[J] to J[ρ]; a general approach is the inversion method of Fukuda et al. [19,20]. The idea is to expand the relevant quantities in a hierarchy, labeled by a counting parameter λ, treating ρ as order unity (which is the same as requiring that there are no corrections to the zero-order density), and match order by order in λ to determine the J i 's and Γ i 's. Zeroth order is a noninteracting system with potential J 0 (x): and Because ρ appears only at zeroth order, it is always specified from the noninteracting system according to Eq. (19); there are no corrections at higher order. This is the Kohn-Sham system with the same density as the fully interacting system. What we have done is to use the freedom to split J into J 0 and J − J 0 , which is essentially the same as introducing a single-particle potential U and splitting the Hamiltonian according to H = (H 0 + U) + (V − U). Typically U is chosen to accelerate (or even allow) convergence of a many-body expansion (e.g., the Bethe-Brueckner-Goldstone theory [27,54,28]). For DFT, we choose it to ensure that the density is unchanged, order by order. Thus, we need the flexibility in the many-body expansion to choose U without seriously degrading the convergence; such freedom is characteristic of low-momentum interactions. (Note: If there is a non-zero external potential, it is simply included with J 0 .) We diagonalize W 0 [J 0 ] by introducing Kohn-Sham orbitals φ i and eigenvalues Then W 0 is equal to the sum of ε i 's. The orbitals and eigenvalues are used to construct the Kohn-Sham Green's functions, which are used as the propagator lines in calculations the W i [J 0 ] diagrams. Finally, we find J 0 for the ground state by truncating the chain at Γ imax , and completing the self-consistency loop: Calculating the successive Γ i 's, whose sum is directly proportional to the desired energy functional, is described in Refs. [20,21,55,23]. 
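The construction just outlined can be written schematically as follows (a sketch in one common sign convention with β = Ω = 1 as above; signs and normalizations differ between references, so this should be read as orientation rather than as the paper's exact equations).

```latex
% Generating functional with a static source J(x) coupled to the density, and its Legendre transform:
Z[J] = e^{-W[J]} , \qquad
\rho(x) = \frac{\delta W[J]}{\delta J(x)} , \qquad
\Gamma[\rho] = W[J] - \int\! dx\, J(x)\,\rho(x) , \qquad
\frac{\delta \Gamma[\rho]}{\delta \rho(x)} = -J(x)\;\;\Longrightarrow\;\;
\left.\frac{\delta \Gamma[\rho]}{\delta \rho(x)}\right|_{\rho_{\mathrm{gs}}} = 0 .
% Inversion method: expand source, generating functional, and effective action in a counting parameter,
J = J_0 + \lambda J_1 + \lambda^2 J_2 + \cdots , \qquad
W = W_0 + \lambda W_1 + \cdots , \qquad
\Gamma = \Gamma_0 + \lambda \Gamma_1 + \cdots ,
% with zeroth order defining the noninteracting Kohn-Sham system at the full density,
\Bigl[-\tfrac{\nabla^2}{2M} + J_0(x)\Bigr]\phi_i(x) = \varepsilon_i\,\phi_i(x) , \qquad
W_0[J_0] = \sum_{i\,\in\,\mathrm{occ}} \varepsilon_i \quad (\text{schematically}) ,
% and a self-consistency condition fixing the Kohn-Sham potential from the interaction terms,
J_0(x) = \frac{\delta}{\delta\rho(x)} \sum_{i\ge 1} \Gamma_i[\rho]
\quad (\text{truncated at some } i_{\max}) .
```

Truncating at the first interaction term corresponds to the Hartree-Fock level used in the exploratory calculations here.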
When transforming from W i to Γ i , there are additional diagrams that take into account the adjustment of the source to maintain the same density and also so-called anomalous diagrams (these are two-particle reducible). A general discussion and Feynman rules for these diagrams are given in Refs. [20,21,23]. These two types of contribution cancel up through N 3 LO in an EFT expansion with short-range forces using dimensional regularization [23], just as they do in the inversion method used long ago by Kohn, Luttinger, and Ward [56,57] to show the relationship of zero-temperature diagrammatic calculations to ones using the finite-temperature Matsubara formalism in the zero-temperature limit. In the present application of the DME approximation to the effective action DFT formalism, they also cancel and so are omitted entirely. Note that even though solving for Kohn-Sham orbitals makes the approach look like a mean-field Hartree calculation, the approximation to the energy and density is only in the truncation of Eq. (23). It is a mean-field formalism in the sense of a conventional loop expansion, which is nonperturbative only in the background field while including further correlations perturbatively order-by-order in loops. The special feature of DFT is that the saddlepoint evaluation applies the condition that there are no corrections to the density. We emphasize that this is not ordinarily an appropriate expansion for internucleon interactions; it is the special features of low-momentum interactions that make them suitable. To generalize the energy functional to accommodate additional densities such as τ and J, we simply introduce an additional source coupled to each density. Thus, to generate a DFT functional of the kinetic-energy density as well as the density, add η(x) ∇ψ † · ∇ψ to the Lagrangian and Legendre transform to an effective action of ρ and τ [25]: The inversion method results in two Kohn-Sham potentials, with an effective mass 1/2M * (x) ≡ 1/2M − η 0 (x), just like in Skyrme HF. Generalizing to the spin-orbit or other densities (including pairing [40]) proceeds analogously. We note that the variational principle implies that adding sources will always improve the effectiveness of the energy functional. The Feynman diagrams for W i will in general include multiple vertex points over which to integrate. Further, the dependence on the densities will not be explicit except when we have Hartree terms with a local potential (that is, a potential diagonal in coordinate representation). One way to proceed is to calculate the Kohn-Sham potentials using a functional chain rule, e.g., and steepest descent [21]. This is illustrated schematically for a local interaction in Fig. 1. We see that the Kohn-Sham potential is always just a function of R but that the functional is very non-local. If zero-range interactions are used, these diagrams collapse into an expression for J 0 (R) that has no internal vertices, but this is no longer true for diagrams with more than one interaction. Orbital-based methods take the chain rule in Eq. (27) one step further, adding a functional derivative of the sources with respect to the φ i 's (and ε i 's); see Refs. [18,58,59,60] for background on these calculations applied to electronic systems. Eventually, we plan to carry out such calculations to construct the full energy density functional. An alternative in the short term is to approximate W int so that the dependence on the densities (rather than the sources or the orbitals) is explicit. 
This has two effects: the construction of the Γ i from the W i does not have additional terms and the necessary functional derivatives are immediate. An example of such an approach is the local density approximation (LDA). Here we go beyond the LDA with the density matrix expansion (DME). By expanding the W i about a "center-of-mass" R, we generate a local energy density that is a function of densities (ρ, τ , . . . ) at R. We choose sources to match these densities and carry out the Legendre transformation implicitly; the end result at leading order is calculating W 1 using density matrices built from Kohn-Sham orbitals. We are able to vary with respect to the orbitals because the constraint of a multiplicative Kohn-Sham potential is built in. Then the resulting Kohn-Sham DFT has precisely the form of the Skyrme Hartree-Fock energy functional and single-particle equations. Low-Momentum Potentials The original DME application was based on a Hartree-Fock energy functional calculated with a G matrix, following the Brueckner-Bethe-Goldstone (BBG) method [27,54,28]. The latter involves infinite resummations of diagrams for nuclear many-body theory, as needed to deal with strongly repulsive potentials. In BBG there are two general resummations: the ladder diagrams into a G matrix and the hole-line expansion using the G matrix. Furthermore, to accelerate convergence of the hole-line expansion one needs to carefully choose a single-particle potential. This is problematic for the success of a Kohn-Sham DFT construction, for which the background field (which acts as a singleparticle potential) has a separate constraint, namely to maintain the fermion density distribution. Renormalization group (RG) methods can be used to evolve realistic nucleonnucleon potentials (e.g., chiral EFT potentials at N 3 LO), which typically have strong coupling between high and low momentum (i.e., off-diagonal matrix elements of the potential in momentum representation are substantial), to derive low-momentum potentials in which high and low momentum parts are decoupled. This can be accomplished by lowering a momentum cutoff Λ [4,5,6,7] or performing a series of unitary transformations that drive the hamiltonian toward the diagonal [10,11,12]. The UCOM transformations of Ref. [13] is an alternative to explicit RG methods. In all cases, we have a potential for which only low momenta contribute to low-energy nuclear observables, such as the binding energy of nuclei. For convenience, we'll refer to any of these as V low k . We stress that evolving V low k does not lose relevant information for low-energy physics, which includes nuclear ground states and low-lying excitations, as long as the leading many-body interactions are kept [11]. The long-range physics, which is from pion exchange (and Coulomb), is preserved and remains local, while relevant short-range physics is encoded in the low-momentum potential through the RG evolution. Most important, for any V low k potential the obstacles from strongly repulsive potentials are removed. Hartree-Fock (including three-body interactions) saturates nuclear matter and G matrix resummations are not required (but may still be advantageous). Thus, we have a hierarchy suitable for DFT based on many-body perturbation theory. [Note: While the need for particle-hole resummations remains to be investigated for V low k potentials, results from the analogous UCOM potentials indicate perturbative particle-hole contributions for the energy [14].] 
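The decoupling of high- and low-momentum modes invoked above can be illustrated with a toy numerical experiment. The sketch below is not a realistic nuclear calculation: the 3x3 "Hamiltonian", the generator choice, and the flow range are invented for illustration. It only demonstrates the generic mechanism of a unitary flow that drives a matrix toward diagonal form while leaving its eigenvalues untouched, in the spirit of the transformations toward the diagonal mentioned above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy "momentum-space Hamiltonian": diagonal kinetic term plus a strong
# coupling between low- and high-momentum states (illustrative numbers only).
T = np.diag([1.0, 4.0, 9.0])
V = np.array([[-0.5, -0.4, -0.3],
              [-0.4, -0.2, -0.25],
              [-0.3, -0.25, -0.1]])
H0 = T + V

def rhs(s, y):
    """Flow equation dH/ds = [[T, H], H] with the kinetic-energy generator."""
    H = y.reshape(3, 3)
    eta = T @ H - H @ T          # eta = [T, H]
    dH = eta @ H - H @ eta       # dH/ds = [eta, H]
    return dH.ravel()

sol = solve_ivp(rhs, [0.0, 2.0], H0.ravel(), rtol=1e-8, atol=1e-10)
Hs = sol.y[:, -1].reshape(3, 3)

print("initial eigenvalues :", np.sort(np.linalg.eigvalsh(H0)))
print("evolved eigenvalues :", np.sort(np.linalg.eigvalsh(Hs)))   # essentially unchanged
print("largest off-diagonal element after flow:",
      np.max(np.abs(Hs - np.diag(np.diag(Hs)))))                  # strongly suppressed
```

As the flow proceeds, the off-diagonal (high/low momentum) matrix elements are suppressed while the spectrum is preserved, which is the schematic content of the statement that low-momentum potentials retain the low-energy physics.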
While the evolution of V low k potentials does not disturb the locality of initial long-range potentials, the short-range part becomes increasingly non-local. That is, in coordinate representation r|V |r ′ has an increasing range in |r−r ′ |. Thus we must test that the DME is a good expansion for such non-localities. The interactions must include three-body (and higher-body) potentials, which should be consistently evolved with the two-body potential. These are not yet available (although SRG methods show promise of providing them in the near future [10,11,12]), and are instead approximated by adjusted chiral N 2 LO three-body potentials [7]. The validity of this approximation relies on the RG methods modifying only the short-distance part of the potential and is supported by the observation that the EFT hierarchy of many-body forces appears to be preserved by the RG running [7]. The N 2 LO three-body potentials are local and we restrict our present investigation for now to this option. Given this microscopic NN and NNN input, we apply the density matrix expansion to derive an energy density functional of the Skyrme form. DME for Two-Body Potentials in Momentum Space In this section we derive the density matrix expansion for a microscopic DFT starting from low-momentum (and non-local) two-body potentials. From Section 2.2, the relevant object we need to expand is W int , which is expressed in terms of the Kohn-Sham orbitals and eigenvalues that comprise the Kohn-Sham single-particle propagators. For Hartree-Fock contributions of the form in Fig. 2(a), however, only the orbitals enter because the Kohn-Sham Green's function reduces to the density matrix. Similarly, higher-order contributions such as the ladder diagrams in the particle-particle (pp) channel can also be put approximately into this form by averaging over the state dependence arising from the intermediate-state energy denominators. Therefore, while the results in this section are derived for the Hartree-Fock contributions to the functional, they can easily be generalized to include higher-order ladder contributions; this will be explored in a future publication. In essence, the DME maps the orbital-dependent expressions for contributions to W int of the type in Fig. 2(a) into a quasi-local form, with explicit dependence on the local densities ρ(R), τ (R), ∇ 2 ρ(R), and so on. This greatly simplifies the determination of the Kohn-Sham potential because the functional derivatives of Γ int can be evaluated directly. Expression for W HF Before presenting the details of the DME derivation and its application to nonlocal low-momentum interactions, it is useful to first derive in some detail the starting expression for W HF , the Hartree-Fock contribution to W int . This will serve to introduce our basic notation and to highlight the differences between most existing DME studies, which are formulated with local interactions and in coordinate space throughout, and the current approach, which is formulated in momentum space and geared towards non-local potentials. For a local potential, the distinction between the direct (Hartree) and exchange (Fock) contributions is significant, and is reflected in the conventional decomposition of the DFT energy functional for Coulomb systems, which separates out the Hartree piece. 
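To make the contrast concrete, recall the familiar schematic form of the Hartree-Fock energy for a local two-body potential V(r1 − r2); spin-isospin structure is suppressed here for brevity:

```latex
\[
  E_{\mathrm{Hartree}} = \tfrac12 \int d^3r_1\, d^3r_2\;
      \rho(\mathbf r_1)\, V(\mathbf r_1-\mathbf r_2)\, \rho(\mathbf r_2), \qquad
  E_{\mathrm{Fock}} = -\tfrac12 \int d^3r_1\, d^3r_2\;
      \bigl|\rho(\mathbf r_1,\mathbf r_2)\bigr|^{2}\, V(\mathbf r_1-\mathbf r_2),
\]
\[
  \text{with}\quad \rho(\mathbf r_1,\mathbf r_2)=\sum_{i\,\in\,\mathrm{occ}}
      \phi_i(\mathbf r_1)\,\phi_i^{*}(\mathbf r_2).
\]
```

Only the exchange (Fock) term samples the off-diagonal density matrix; the direct (Hartree) term is already an explicit functional of the local density, which is why it is conventionally split off for Coulomb systems.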
For a non-local potential, the distinction is blurred because the Hartree contribution now involves the density matrix (as opposed to the density) and it is not useful to make this separation when the range of the interaction is comparable to the non-locality. 4 Consequently, throughout this section we work instead with an antisymmetrized interaction. For a general (i.e., non-local) free-space two-body potential V , W HF is defined in terms of Kohn-Sham states [Eq. (20)] labeled by i and j, The summation is over the occupied states and the antisymmetrized interaction V = V (1 − P 12 ) has been introduced, with the exchange operator P 12 equal to the product of operators for spin, isospin, and space exchange, P 12 = P σ P τ P r . Note that the dependence of W HF on the Kohn-Sham potential has been suppressed. By making repeated use of the completeness relation W HF can be written in terms of the coordinate space Kohn-Sham orbitals as From the definition of the Kohn-Sham density matrix, so Eq. (30) can be written as where a matrix notation is used in the second equation and the traces denote summations over the spin and isospin indices for "particle 1" and "particle 2". Hereafter we drop the superscripts on V and ρ that indicate which space they act in as it will be clear from the context. Expanding the ρ matrices on Pauli spin and isospin matrices we have where we have assumed the absence of charge-mixing in the single-particle states. The usual scalar-isoscalar, scalar-isovector, vector-isoscalar, and vectorisovector components are obtained by taking the relevant traces, (a) Schematic diagram for approximations to W int that can be expanded using the DME. (b) Coordinates appropriate for the DME applied to the Hartree-Fock potential energy with a non-local potential. In this initial work we will only consider terms in the energy functional arising from products of the scalar-isoscalar (ρ 0 ) density matrices in Eq. (32), which are the relevant terms for spin-saturated systems with N = Z. Thus, we will drop the subscript "0" on the density matrices from now on. After switching to relative/center-of-mass (COM) coordinates (see Fig. 2) and noting that the free-space two-nucleon potential is diagonal in the COM coordinate, the starting point for our DME of the two-body Hartree-Fock contribution from a non-local interaction is where V denotes the antisymmetrized interaction and the trace is defined as The DME derivation of Negele and Vautherin (NV) [33] focuses on applications to local potentials, which satisfy r| V |r ′ = δ(r − r ′ ) r| V |r ′ . While the original NV work included coordinate-space formulas applicable for non-local interactions 5 , for low-momentum potentials it is convenient to revisit and extend the original derivation to a momentum-space formulation. We note that Kaiser et al. have shown how to use medium-insertions in momentum space in their application of the DME to chiral perturbation theory at finite density [45,46,47]. For the momentum space formulation, we first rewrite the density matrices appearing in Eq. (38) as where the vectors appearing on the right-hand side are defined by (see Fig. 2) Introducing the Fourier transform of V in the momentum transfers conjugate to Σ and ∆, (where k ′ , k correspond to relative momenta) gives where we have defined The momenta q and p correspond to the momentum transfers for a local interaction in the direct and exchange channels. That is, the direct matrix element is a function of q and the exchange is a function of p. 
In contrast, for a non-local interaction the direct and exchange matrix elements depend on both q and p. This is the reason why we do not attempt to separate out the Hartree (direct) and Fock (exchange) contributions to W HF , as is commonly done for local interactions. The trace of Eq. (45) can be written in a more convenient form for our purposes as a sum over partial wave matrix elements, where the primed summation means that it is restricted to values where l+s+t is odd, with k = 1 2 (p+ q) and k ′ = 1 2 (p−q). For simplicity we have assumed a charge-independent two-nucleon interaction, although charge-dependence can easily be included. Density Matrix Expansion The expression Eq. (43) for W HF is written in terms of off-diagonal density matrices constructed from the Kohn-Sham orbitals. Consequently, the corresponding Γ HF [ρ] is an implicit functional of the density. The orbitaldependent Γ HF requires the use of the functional derivative chain rule to evaluate J 1 (R) = δΓ HF [ρ]/δρ(R) in the self-consistent determination of the Kohn-Sham potential, which presents computational challenges and would require substantial enhancements to existing Skyrme HFB codes. Alternatively, we can apply Negele and Vautherin's DME to W HF , resulting in an expression as in Eq. (9) with explicit dependence on the local quantities ρ(R), τ (R), and |∇ρ(R)| 2 , The starting point of the DME is the formal identity [33] where ∇ 1 and ∇ 2 act on R 1 and R 1 , respectively, and the result is evaluated at We assume here that time-reversed orbitals are filled pairwise, so that the linear term of the exponential expansion vanishes. Hence, through second-order gradient terms the angular integral of the density matrix squared is equivalent to the integral of the square of the angle-averaged density matrix. In this way, the leading off-diagonal behavior of the density matrices in W HF is captured by simpler expressions. The angle-averaged density matrix takes the form with s ≡ |s|. Using a Bessel-function expansion (which is simply the usual plane-wave expansion with real arguments), where Q is related to the usual Legendre polynomial by Q(z 2 ) = P 2k+1 (iz)/(iz), we can express the angle-averaged density matrix aŝ where an arbitrary momentum scale k F (R) has been introduced. Equation (51) is independent of k F if all terms are kept, but any truncation will give results depending on the particular choice for k F . In this initial study, we employ the standard LDA choice of Negele and Vautherin: Alternative choices for k F (R) to optimize the convergence of truncated expansions of Eq. (51) and to establish a power counting will be explored in a future paper. Following Negele and Vautherin, Eq. (51) is truncated to terms with n 1, which yields the fundamental equation of the DME, and the kinetic energy density is τ (R) = i |∇φ i (R)| 2 . If a short-range interaction is folded with the density matrix, then a truncated Taylor series expansion of Eq. (53) in powers of s would be justified and would produce a quasi-local functional. But the local k F in the interior of a nucleus is typically greater than the pion mass m π , so such an expansion would give a poor representation of the physics of the long-range pion exchange interaction. Instead, the DME is constructed as an expansion about the exact nuclear matter density matrix. Thus, Eq. (53) has the important feature that it reduces to the density matrix in the homogenous nuclear matter limit, ρ NM (R + s/2, R − s/2) = ρ SL (k F s) ρ. 
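The building blocks referred to in this paragraph can be written out explicitly. With the conventions used here, the off-diagonal density matrix is generated by a shift operator, its angle average is the object that is actually expanded, and the Slater function governs the homogeneous limit:

```latex
\[
  \rho\Bigl(\mathbf R+\tfrac{\mathbf s}{2},\,\mathbf R-\tfrac{\mathbf s}{2}\Bigr)
   = e^{\frac{1}{2}\mathbf s\cdot(\boldsymbol\nabla_1-\boldsymbol\nabla_2)}\,
     \rho(\mathbf R_1,\mathbf R_2)\Big|_{\mathbf R_1=\mathbf R_2=\mathbf R},
  \qquad
  \bar\rho(\mathbf R,s)=\frac{1}{4\pi}\int d\Omega_{\hat{\mathbf s}}\;
     \rho\Bigl(\mathbf R+\tfrac{\mathbf s}{2},\,\mathbf R-\tfrac{\mathbf s}{2}\Bigr),
\]
\[
  \rho_{\mathrm{SL}}(x)\equiv\frac{3\,j_1(x)}{x}
     =\frac{3\,(\sin x - x\cos x)}{x^{3}},
  \qquad
  \rho_{\mathrm{NM}}\Bigl(\mathbf R+\tfrac{\mathbf s}{2},\,\mathbf R-\tfrac{\mathbf s}{2}\Bigr)
     =\rho_{\mathrm{SL}}(k_F s)\,\rho
  \quad\text{with}\quad \rho=\frac{2k_F^3}{3\pi^2}.
\]
```

In homogeneous, spin-saturated symmetric matter the last relation is exact, and the truncated expansion of Eq. (53) is arranged so that this limit is preserved while the leading inhomogeneity corrections enter through τ(R) and ∇²ρ(R).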
As a result, the resummed expansion in Eq. (53) does not distort the finite range physics, as the long-range one-pion-exchange contribution to nuclear matter is exactly reproduced and the finite-range physics is encoded as non-trivial (e.g., non-monomial) density dependence in the resulting functional. The small parameters justifying this expansion emerge in the functionals as integrals over the inhomogeneities of the density. (See Ref. [24] for examples of estimated contributions to a functional for a model problem.) In the case of a local interaction, the Fock term is schematically given by W F ∼ dR ds ρ 2 (R+s/2, R−s/2)V (s), so a single application of Eq. (53) is sufficient to cast W HF into the desired form. For a non-local interaction the calculation is more involved as two applications of the DME are required. Following Negele and Vautherin, we first rewrite the density matrices appearing in Eq. (38) as where the vectors appearing on the right-hand side are defined by (see Fig. 2) To simplify the notation we define and it is from now on understood that the functions without superscripts depend only on the center-of-mass vector R if the argument is not written explicitly. The first application of the DME corresponds to an expansion in the nonlocality ∆ about the "shifted" COM coordinates R ± , giving Thus, we can expand the product of density matrices in Eq. (38) as where we have dropped terms quadratic in the gradient. We then define and use Eq. (50) to perform a second density matrix expansion on α(ρ From a Taylor expansion of ρ SL (k F Σ) and g(k F Σ) it is evident that the (k F Σ) 2 coefficients of α 2 exactly cancel each other. Because we desire a final expression that reproduces the exact nuclear matter limit (and the presence of the ρ SL (k F Σ) term spoils this limit), we follow the philosophy of Negele and Vautherin and use this leading cancellation to motivate a different rearrangement and truncation of Eq. (51) such that The freedom to rearrange the expansion as in the last equation stems from the fact that the restriction of Eq. (51) to n 1 terms gives a truncated expansion in powers of Σ 2 . The neglected terms, starting with Σ 4 , involve higher derivatives of the density. But having neglected these Σ 4 terms, retaining the other Σ 4 (and higher) contributions that are summed in g(k F Σ) is somewhat arbitrary. Therefore, Negele and Vautherin argue that it is advantageous to use this arbitrariness to "reverse engineer" the expansion so that the exact nuclear matter limit is always exactly reproduced by the leading term [33]. We emphasize that this is a prescription without established power counting or error estimates, which must be assessed in future work. As we show in Section 5, different prescriptions can lead to significant changes in nuclear observables. The gradient terms in the above equation can be evaluated with the aid of the chain rule 6 ∇α(ρ) = ∇ρ ∂α ∂ρ , Recalling that we define the local Fermi momentum as k F = (3π 2 ρ) 1/3 , we can explicitly evaluate the first and second derivatives of α, Pulling it all together, the product of density matrices in Eq. (38) are approximately given in terms of local quantities by 3.3 Evaluation of F (R, q, p) and the DME coupling functions wherep = p/k F etc., and the R-dependence of k F , ρ, and τ has been suppressed. The functions I j (p) and I j (q) are simple polynomials (and theta functions) in the scaled momentap andq: Note that the trivial angular dependence of Eqs. 
(69)-(73) is a consequence of the angle averaging that is implicit with each application of the DME. With the aid of Eqs. (66)-(73), we can now obtain explicit expressions for the A, B, and C coupling functions by grouping terms appropriately and performing the relevant angular integrals. The expressions for A and B follow immediately and are given by where Tr στ [ V(0, p)] is given by a simple sum of diagonal matrix elements in the different partial waves, The primed sum is over all channels for which l + s + t is odd. The contributions to W HF that have gradients of the local density take the form We can perform a partial integration on the ∇ 2 ρ terms to cast them into the canonical form proportional to only |∇ρ| 2 ; that is, so that In practice it is efficient and accurate to calculate the derivative in Eq. (79) numerically rather than analytically. The expressions for C |∇ρ| 2 and C ∇ 2 ρ are obtained by substituting the relevant terms in F (R, q, p) [see Eqs. (66)-(67)] into Eq. (43) and performing the angular integrals, where V av (q, p) is the angle-averaged interaction, and V(q, p) is given by Eq. (46). Note that care must be taken in the evaluation of dC ∇ 2 ρ /dρ if the vertex V(q, p) is density-dependent or if the local Fermi momentum is not taken to be k F = (3π 2 ρ/2) 1/3 . DME for three-body potentials in momentum space In this section we extend the DME as applied to the Hartree-Fock energy to include three-body force contributions. The low-momentum interactions currently in use do not yet include consistently evolved three-body forces because of technical difficulties in carrying out the momentum-space evolution 7 . Therefore, as an approximation to the evolution, two short-distance low-energy constants in the leading chiral three-body force (this is N 2 LO according to the power counting of Refs. [63,64]) are fit at each cutoff to properties of the triton and 4 He to determine the three-body force. In the present work, we will use this force exclusively and postpone the treatment of general non-local three-body forces, as will be produced by an SRG evolution. Decomposing the three-body potential in the standard fashion [65], where V (i) is symmetric under j ↔ k, we can write the full interaction in terms of one component V = V (1) + P 23 P 13 V (1) P 13 P 23 + P 23 P 12 V (1) P 12 P 23 , and so on. This allows us to simplify Eq. (84) by using where Tr 123 ≡ Tr σ 1 τ 1 Tr σ 2 τ 2 Tr σ 3 τ 3 and a local 3NF has been assumed. Similarly, the scalar-isoscalar contributions to W HF arising from the double-exchanges are given by where the Fourier transformed 3NF components are defined by Here Ω is the volume (which drops out of all final expressions) and q i = k i −k ′ i is the momentum transfer. As discussed above, we approximate the RG evolution of the 3N force with the leading-order chiral 3N force, which is comprised of a long-range 2π-exchange part V c , an intermediate-range 1π-exchange part V D and a short-range contact π π π Fig. 3. The chiral three-body force at N 2 LO according to the power counting of Ref. [68], which has a long-range 2π-exchange part V c (left), an intermediate-range 1π-exchange part V D (middle), and a short-range contact interaction V E (right). interaction V E [63,64], see Fig. 3. The 2π-exchange interaction is where F αβ ijk is defined as while the 1π-exchange and contact interactions are, respectively, In applying Eqs. while the various double-exchange terms give . 
(103) Note that for the V E and V D terms, it is not necessary to treat separately the single-and double-exchange contributions because their structure is identical due to the nature of the zero-range three-and two-body vertices. Substituting the spin-isospin-traced interactions into Eqs. (90)-(91) and simplifying gives where g E ≡ c E /f 4 π Λ χ and g D ≡ (g A /4f 2 π ) (c D /f 2 π Λ χ ). Similarly, the single-and double-exchange contributions from the 2π-exchange 3NF are given by and W (2x,c) HF where g c ≡ (g A /2f π ) 2 . D-term As with the nucleon-nucleon contributions to W HF , it is convenient to recast the 3NF Hartree-Fock expressions into momentum space. Changing to relative/center-of-mass coordinates (R = (x 2 + x 3 )/2, r = x 2 − x 3 ), the 1πexchange 3N Hartree-Fock contribution becomes where we have defined Applying the DME separately to the product of non-local and local densities in F (R, q) yields [ρ(R + r/2, R − r/2)] 2 ≈ ρ SL (k F r)ρ + r 2 g(k F r) and Combining the two expansions and dropping terms of higher order in the DME, we find where the R-dependence of k F and the local densities has been suppressed. Evaluating the Fourier transform defined in Eq. (109) using the approximate DME expressions and grouping terms according to which coupling function contribute gives where the integrals I 1 (q) and I 2 (q) were defined in Eqs. (69)-(70) andq ≡ q/k F . Together with Eq. (108), we obtain the 1π-exchange 3NF contributions to the EDF coupling functions c-term single-exchange Starting from the single-exchange HF contribution of the 2π-exchange 3NF in Eq. (106), we first change to Jacobi coordinates, followed by the change of momentum variables q ≡ 1 2 (q 2 −q 3 ) and p = q 2 +q 3 . The result is where F 1x (R, p, q) is the Fourier transform of the product of density matrices, with q 2 = p/2 + q and q 3 = p/2 − q. Referring to Eq. (121), we first expand ρ(x 2 , x 3 ) as where R − ≡ R − r 1 /3. Performing a subsequent expansion about R gives where the second application of the DME has been modified slightly to ensure the leading term is exact in the nuclear matter limit. Similarly, the diagonal density ρ(x 1 ) is expanded as Therefore, to second order in the DME we obtain For the usual LDA choice for k F (R), the ∇ 2 (ρ SL ρ) term evaluates to which suggests a grouping of terms in Eq. (126) according to which coupling function they contribute to, Evaluating the Fourier transform in Eq. (121) gives where I 1 -I 5 have been defined in Eqs. (69)-(73) and the new integrals I 6 -I 8 are defined as With explicit expressions for the DME approximation to F 1x (R, p, q) in hand, all that remains is to insert Eqs. (131)-(133) into Eq. (120) and group terms accordingly. The A[ρ] and B[ρ] coupling functions follow immediately and are given by The derivation of the C[ρ] 1x 2π coupling is a bit more complicated because we must first partially integrate all ∇ 2 ρ terms. Writing the gradient contributions to W 1x HF as we obtain Comparing to Eqs. (120) and (133) we find and where the angle-averaged interaction V c 1 c 3 (p, q) is defined as c-term double exchange The double exchange contribution from the c-term is given in Eq. (91). Since this involves a product of three off-diagonal density matrices, the DME is significantly more involved than for the other 3N contributions. In order to assess the sensitivity to the details of the (non-unique) DME prescription, we consider two different expansion schemes for these contributions, which we denote by DME I and DME II. 
We expect the differences between the two schemes should be "small" if the master formula Eq. (53) is indeed a controlled expansion, and if results are insensitive to the different angle-averaging used in the two schemes. DME I We start by noting that repeated application of the master formula Eq. (53) factorizes the three-body center-of-mass and relative coordinate dependence as where O l (R) is some monomial of the local densities and i, j, m are a permutation of 1, 2, and 3. The relative coordinate functions can be written in terms of their Fourier transforms, e.g., λ(k m , k ij ) = dr m dr ij (2π) 6 e ikmrm e ik ij r ij λ(r m , r ij ) . The second term has the coefficient ∇ 2 ρ Integrating over the δ-functions leads to a lengthy expression that we will not give here. Using partial integration we can finally write the total expression in the form The particular order of integrations we have carried out gives factors of k F appearing as UV cutoffs in the remaining integrals. Such a simplification arises for all contributions to the HF energy and the resulting integrals can therefore be easily integrated numerically despite the relatively large number of integration variables. Key to the prescription used here is the Fourier transform of the expanded density matrices to momentum space. Due to its generality, this approach can easily be extended to the calculation of higher-order contributions to the DME. A similar approach was introduced in Ref. [45], where the authors used the Fourier transform of the expanded density matrix to generate medium insertions for a diagrammatic calculation of the nuclear energy density functional using chiral perturbation theory. DME II The DME I prescription outlined above differs from the original NV approach in two respects. First, we do not rearrange and truncate the expansion by hand to ensure that the nuclear matter limit is exactly reproduced. Second, the DME I prescription keeps cross-terms in the product of the three expanded density matrices that are formally of higher order in the NV approach. In order to quantify these effects and assess whether the expansion is under control, we have also performed the expansion where we strictly follow the original NV philosophy (DME II). We also note the differences in angle-averaging that arise with the different DME schemes. In the DME I approach, each ρ(x i , x j ) is first expanded in the natural Jacobi coordinates (R, r k , r ij ), and then the three expanded density matrices are expressed in one common set of Jacobi coordinates. In the DME II prescription, we follow a different path by expressing the product of density matrices in one common set of Jacobi coordinates from the outset. The subsequent DME implies a different angle-averaging, since only one density matrix is expanded in its natural Jacobi basis. We do not include the derivation of the DME II equations here, as it proceeds in much the same spirit as for the DME I, although we note that the final expressions are considerably more cumbersome since one finds different λ l functions depending on whether one is expanding the ρ(x i , x j ) corresponding to the chosen Jacobi coordinates or one of the other two density matrices. Results In this section, we make some basic tests of the DME. We have two modest goals: to check that the DME does not degrade when applied to non-local, low-momentum NN potentials and to make a first assessment of the relative contributions of two-and three-body interactions. 
For the first goal, we approximate the self-consistent Hartree-Fock ground-state wave function by a Slater determinant of harmonic oscillator single-particle wave functions. Using these wave functions, we compare the DME approximation for the energy of a schematic model NN potential to the exact result where the finite range and non-locality of the interaction is treated without approximation. Then with the same wave function we check the error as we change the resolution (cutoff) of a realistic low-momentum potential. For the second goal, we exhibit some numerical results for the DME coefficient functions to illustrate the non-trivial density dependence and to show the effects of different prescriptions for the three-body DME. These are meant only to set a baseline because, at a minimum, we should include second-order contributions (i.e., beyond Hartree-Fock) before expecting quantitative predictions for nuclear structure or analyzing the cutoff dependence of the energy functional. However, even at this stage it should be meaningful to use these results to compare the relative contributions of two-and three-body interactions. Although the original DME paper introduced formalism for non-local potentials [33], previous investigations of the effectiveness of the DME studied only local potentials (or local approximations to the G matrix). Because the low-momentum potentials used here can be strongly non-local, we first test whether the extra expansion required degrades the accuracy of the DME. We consider a model potential: with v a Gaussian potential, so the range is set by α. The range of the nonlocality is set by β; in the limit β → 0, V (r, r ′ ) → v(r/α)δ 3 (r − r ′ ). In Fig. 4, the effects of non-localities on the accuracy of the DME for integrated quantities (e.g., V ) is illustrated using this potential. We use a harmonic oscillator model of 40 Ca (i.e., the ground-state wave function is a Slater determinant of harmonic oscillator orbitals) and calculate the expectation value of the non-local V (r, r ′ ) in the Hartree-Fock ground state. For a given range α, we compare the error for a non-locality β to the error with β = 0. It is evident that the effect of the non-locality on the degradation of the DME is unimportant up to at least twice the range. Even when α is taken as small as the typical range of a repulsive core there should be no problem for the range of low-momentum cutoffs typically considered. The errors per nucleon for the DME with the same model ground state but with a realistic low-momentum nucleon-nucleon potential (starting from the chiral N 3 LO potential from Ref. [69]) are shown in Fig. 5 for N = Z nuclei (without Coulomb) for A = 16, 40, and 80. It is evident that the cutoff dependence of the error is very slight until Λ < 2 fm −1 . Because the evolution of the potential does not alter the long-distance part, the weak cutoff dependence of the error implies that the short-distance contribution is very well reproduced and provides further confirmation that non-locality (which grows with decreasing Λ) is not a problem for the DME (note that long-range local interactions remain local). These errors are also smaller than errors found in early DME tests. The model calculations in Fig. 5 treat both direct (Hartree) and exchange terms with the DME. It was recognized long ago that the DME is ill-suited for long-range direct terms, which should be calculated exactly instead [34]. 
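A stripped-down, one-dimensional analogue of the test just described can be set up in a few lines. The sketch below fills N spinless harmonic-oscillator orbitals, evaluates the off-diagonal density matrix exactly, and compares it with the leading local-density (Slater-type) approximation that underlies the DME. All parameters and the 1D reduction are illustrative assumptions: the actual test above is three-dimensional, for 40Ca, and uses the realistic potentials described in the text; in 1D the Slater function sin(k_F s)/(k_F s) with k_F = πρ replaces its 3D counterpart.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def ho_orbital(n, x):
    """1D harmonic-oscillator orbital (hbar = m = omega = 1)."""
    c = np.zeros(n + 1); c[n] = 1.0
    norm = 1.0 / sqrt(2.0**n * factorial(n) * sqrt(pi))
    return norm * hermval(x, c) * np.exp(-x**2 / 2.0)

N = 8                                      # number of filled (spinless) orbitals
R, s = 0.5, np.linspace(0.0, 3.0, 61)      # center-of-mass point and separations

x1, x2 = R + s / 2.0, R - s / 2.0
rho_offdiag = sum(ho_orbital(n, x1) * ho_orbital(n, x2) for n in range(N))  # exact
rho_local = sum(ho_orbital(n, R)**2 for n in range(N))                      # rho(R)

kF = pi * rho_local                        # local Fermi momentum of a 1D spinless gas
slater = rho_local * np.sinc(kF * s / pi)  # np.sinc(x) = sin(pi x)/(pi x)

for si, ex, ap in zip(s[::10], rho_offdiag[::10], slater[::10]):
    print(f"s = {si:4.1f}   exact = {ex:+.4f}   leading DME/LDA = {ap:+.4f}")
```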
The dashed line in Fig. 5 shows the error for A = 40 but using the NLO potential, which does not have any long-range contributions to the direct scalar term. As expected, the error is significantly smaller than the N3LO result, due at least in part to the crude treatment of the N3LO long-range direct contribution. Since the long-range local terms can be isolated in the potential, it is feasible to perform exact Hartree evaluations of these pieces when implemented in a DFT solver. We turn now to the isoscalar A and B functions, which are the only contributors to uniform, symmetric nuclear matter; the energy per particle as a function of density ρ is obtained by combining the kinetic energy with the A(ρ) and B(ρ) contributions. The individual contributions from A and B at the Hartree-Fock level are plotted in Figs. 6 and 7, and combined into E/A in Fig. 8. These use a two-body V_low k interaction evolved from the Argonne v18 potential [70] with a sharp cutoff at Λ = 2.1 fm−1 and a chiral N2LO three-body force with constants fit to the binding energies of the triton and 4He [67]. Results are given using the NN contribution only and with NNN included, using the two prescriptions for the NNN double-exchange contribution (DME-I and DME-II) described in Section 4. From Figs. 6 and 7, one sees that the ratios of contributions from three-body to two-body tend to increase monotonically with density, but are still only about 20-30% at saturation density. This is consistent with general expectations from chiral power counting. [Fig. 7: contribution to the energy per particle in nuclear matter from the isoscalar coefficient function B(ρ) as a function of the density, from the DME applied to the Hartree-Fock energy calculated using V_low k with Λ = 2.1 fm−1.] Although the densities relevant for actual nuclei are somewhat lower, there is reason to believe the expansion in many-body forces is under control. Past estimates of contributions to Skyrme energy functionals based on naive dimensional analysis [71] suggested large contributions from three-body and even four-body interactions. The present results imply more modest contributions, but evaluating the chiral N3LO four-body contribution at Hartree-Fock will be needed for a definitive assessment. The comparison of the DME-I and DME-II curves gives us an estimate of the truncation error in the expansion applied to the NNN terms, because these prescriptions differ in the contributions of higher-order terms in the expansion. Indeed, we have verified that suppressing these terms by hand brings the predictions for the A and B coefficients into agreement. The qualitative difference for the NNN-only contribution to B is large, but the actual coefficient itself is small, so this should not be alarming. However, because the combination of A and B and the kinetic energy to obtain the nuclear matter energy per particle involves strong cancellations, the spread in Fig. 8 is large on the scale of nuclear binding energies. These differences motivate a generalization of the Negele-Vautherin DME following the discussion in Ref. [72]. In this approach, the expansion of the scalar density matrix takes a factorized form in which the s-dependence is carried by functions Π_n(k_F s) multiplying local densities, where k_F is a momentum scale typically taken to be k_F(R) as in Eq. (52). Similar expansions are made for the other components of the density matrix. Input from finite nuclei can be used to determine the Π_n functions, which can be viewed as general resummations of the DME expansion; see Section 6 for a brief overview. Finally, in Fig. 9, the coefficient function C(ρ) is plotted as a function of density (ρ = 2k_F³/3π²).
Even at the highest density, the three-body contribution is a manageable correction to the two-body result. The NN + NNN result is in qualitative agreement with the results of Fritsch and collaborators, who included two-pion exchanges with explicit ∆-isobars [47], although the three-body contributions in the current work are somewhat larger than the effects arising from explicit ∆-isobars. For this coefficient function the difference between DME-I and DME-II is comparatively small. [Fig. 9: isoscalar coefficient function C(ρ) as a function of the density from the DME applied to the Hartree-Fock energy calculated using V_low k with Λ = 2.1 fm−1; the result including the NN interaction alone is compared to NN plus NNN interactions for the two DME expansions (I and II, see text).] Summary In this paper, we have formulated the density matrix expansion (DME) for low-momentum interactions and applied it to a Hartree-Fock energy functional including both NN and NNN potentials. The output is a set of functions of density that can replace density-independent parameters in standard Skyrme Hartree-Fock energy density functionals. This replacement in Skyrme HF computer codes is shown schematically in Fig. 10. Only one section of such a code would be replaced, and it takes the same inputs (single-particle eigenvalues and wave functions for the orbitals and the corresponding occupation numbers) and delivers the same outputs (local Kohn-Sham potentials). Furthermore, the upgrade from a Skyrme energy functional to a DME energy functional can be carried out in stages. For example, the spin-orbit part and pairing can be kept in Skyrme form, with the rest given by the DME. Details of such a DME implementation will be given elsewhere. A further upgrade to orbital-based methods would also only modify the same part of the code, although the increased computational load will be significant. The numerical results given here are limited and do not touch on many of the most interesting aspects of microscopic DFT from low-momentum potentials. Topics to explore in the future include:
• Examine the resolution or scale dependence of the energy functional by evolving the input low-momentum potential. There will be dependence on the cutoff Λ (if using V_low k) or the flow parameter λ (if using V_srg), both from omitted physics and from intrinsic scale dependence. Calculations at least to second order are needed to separate these dependencies.
• Examine the isovector part of the functional. We can isolate the contributions from the more interesting long-range (pion) parts of the free-space interactions, allowing us to obtain analytic expressions for the dominant density dependence of the isovector DME coupling functions.
• Study the dependence of spin-orbit contributions on NN vs. NNN interactions. This includes the isospin dependence as well as overall magnitudes. The NN spin-orbit contributions arise from short-range interactions, whereas the NNN contributions arise from the long-range two-pion exchange interaction. Therefore, we expect to find a rather different density dependence for the two types of spin-orbit contributions.
• Explore tensor contributions, which have recently been reconsidered phenomenologically [73,74].
• Understand the scaling of contributions from many-body forces. In particular, how does the Hartree-Fock contribution of the four-body force (which is known at N3LO in chiral EFT with conventional Weinberg counting) impact the energy functional?
The calculations presented here are only the first step on the road to a universal nuclear energy density functional (UNEDF) [15]. There are both refinements within the DME framework and generalizations that test its applicability and accuracy. While many of these steps offer significant challenges, in every case a plan is in hand to carry it out. The DME can be directly extended to include second-order (or full particle-particle ladder) contributions by using averaged energies for the energy denominators. However, a more systematic approximation is under development using a short-time expansion [75]. More difficult future steps include dealing with symmetry breaking and restoration in DFT for self-bound systems, dealing with non-localities from near-on-shell particle-hole excitations (vibrations), and incorporating pairing in the same microscopic framework (see Ref. [41]). In extending our calculations we will also modify the standard DME formalism from Ref. [33] that we have followed in the present work. The formalism has problems even beyond the truncation errors from different DME prescriptions already discussed in Sections 4 and 5, the most severe being that it provides an extremely poor description of the vector part of the density matrix. While the standard DME is better at reproducing the scalar density matrices, even here the errors are sufficiently large that the disagreement with a full finite-range Hartree-Fock calculations can reach the MeV per particle level. Gebremariam and collaborators have traced both of these problems to an inadequate phase space averaging (PSA) used in the previous DME approaches [39]. In the derivation of the DME, one incorporates average information about the local momentum distribution into the approximation. The Negele-Vautherin DME uses the phase space of infinite nuclear matter to perform this averaging. However, the local momentum distribution in finite Fermi systems exhibits two striking differences from that of infinite homogenous matter. First, meanfield calculations of nuclei show that the local momentum distribution exhibits a diffuse Fermi surface that is especially pronounced in the nuclear surface. Second, the local momentum distribution is found to be anisotropic, with the deformation accentuated in the surface region of the finite Fermi system. To incorporate both of these missing effects into the DME, Gebremariam et al. have constructed a model for the local momentum distribution based on previous studies of the Wigner distribution function in nuclei [39]. The model parameters are adjusted so that the DME accurately reproduces both integrated quantities, such as the expectation value of the finite-range nucleonnucleon interaction taken between Slater determinants from self-consistent Skyrme-Hartree-Fock calculations, as well as the density matrices themselves. The improvements are substantial, typically reducing relative errors in integrated quantities by as much as an order of magnitude across many different isotope chains. The improvement is especially striking for the vector density matrices. We will test this improved DME in future investigations. The tests of the DME will include benchmarks against ab initio methods in the overlap region of light-to-medium nuclei. Additional information is obtained from putting the nuclei in external fields, which can be added directly to the DFT/DME functional. Work is in progress on comparisons to both coupled cluster and full configuration interaction calculations. 
A key feature is that we use the same Hamiltonian for the microscopic calculation and the DME approximation to the DFT. The freedom to adjust (or turn off) external fields as well as to vary other parameters in the Hamiltonian permits detailed evaluations of the approximate functionals. In parallel there will be refined nuclear matter calculations; power counting arguments from re-examining the Brueckner-Bethe-Goldstone approach in light of low-momentum potentials will provide a framework for organizing higher-order contributions. These investigations should provide insight into how the energy density functional can be fine tuned for greater accuracy in a manner consistent with power counting and EFT principles.
2008-11-26T01:15:05.000Z
2008-11-26T00:00:00.000
{ "year": 2009, "sha1": "06a8f8533132e582f56aff0ff44a66079cb2b0e7", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0811.4198", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "06a8f8533132e582f56aff0ff44a66079cb2b0e7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
186684910
pes2o/s2orc
v3-fos-license
System approach to modeling of industrial technologies The authors presented a system of methods for modeling and improving industrial technologies. The system consists of information and software. The information part is structured information about industrial technologies. The structure has its template. The template has several essential categories used to improve the technological process and eliminate weaknesses in the process chain. The base category is the physical effect that takes place when the technical process proceeds. The programming part of the system can apply various methods of creative search to the content stored in the information part of the system. These methods pay particular attention to energy transformations in the technological process. The system application will allow us to systematize the approach to improving technologies and obtaining new technical solutions. Introduction At present, it is customary to perform design and creation of complex systems from positions of system engineering. A system engineer is a specialist who can combine knowledge fields from different technical and scientific areas in one system and use this interdisciplinary, integrated approach to solve a wide range of technical and other problems when implementing a large complex project [1]. No matter how trivial it sounds, the Russian Federation lags behind developed countries for decades in training of this kind of specialists [2]. The situation would not be so critical if these human engineering resources were not leading doers in implementing the largest and most significant projects, both technical and non-technical, from the creation of the Large Hadronic Collider to the organization of the European Union [3]. Thus, today there are a large number of specialists in various subject areas, which need to be taught to systems thinking to be able to use a systematic approach for solving problems in their field and related fields. It is clear that such task cannot be solved in a short time, without strong support at a high level, especially taking into account the fact that in Russia few people know what a system engineer is, and what is difference between an ordinary engineer and system engineer. But something can be done. Research object It is proposed to solve this problem by the introduction of an intuitively understandable, easy-to-use software and information tool, which will be possible to answer several categories of tasks at once. In general, system engineering solves a vast range of problems, the description of which is beyond the scope of this paper. However, it is necessary to note the following. In the past, engineering began with the fact that there were no standards. Then the appearance of standardized drawings, diagrams, charts, and other formalized information was noticed. Today one cannot imagine modern engineering without a system of technical standards, unified rules, text and graphics documents. So, engineering as such is now in some transition stage. For a long time, the methods of machine processing of all this engineering information are studied [4]. Thus, until now, one has had no such opportunities. System engineering proposed standards for the design, creation, maintenance of systems and other activities with them. At the same time, a large number of so-called unified formal languages for system creation (modeling) appeared [5] to process this information. Many of them have practical application only in information systems, but there are universal languages. 
As a rule, they are sets of graphic symbols, with specific semantics tied to each symbol. A single standard for these languages is not available at the moment. The emergence of formal languages for system engineering did not in itself lead to explosive growth of the industry and was not a panacea for the problems facing scientists and engineers. Therefore, it is advisable to take a slightly different route: a tool is needed that includes not only the modeling rules but also the content itself. The working title of the project is "Technology constructor." Methods It is advisable to create a tool for system engineering using the methods of system engineering itself. According to the postulates of system engineering, the requirements for the system must be determined at the very beginning. It is also essential to identify its stakeholders, i.e., the interested persons, users, and other persons and organizations directly or indirectly related to the project. Table 1 lists the main stakeholder groups and the reasons for their interest in the developed tool:
– availability and ease of studying how particular technics and technologies are applied, with the system ensuring that this information is presented correctly;
– Teachers: the ability to competently provide students with information about technological processes using a system approach, and the ability to quickly find data to improve their own skills;
– Scientists: the possibility of creating a unified system for publishing and reviewing scientific papers using the information part of the system as a basis, the possibility of applying various mechanisms of scientific search to the information part of the system, systematized and centralized accounting of information on scientific novelty, and suppression of plagiarism attempts;
– Engineers: the system can be used to solve engineering problems and improve technologies using a wide range of methods, from methods of intensifying creative search (morphological analysis, functional analysis, TRIZ methods and others) to statistical methods of data science; the use of these methods will provide new technical solutions;
– Businesses: the system is a convenient tool for complete and detailed modeling of the production process and for further technical and economic analysis of the resulting model in order to reduce production costs and improve the technological chain; it also enables the promotion of products and services within its framework, the solution of identified problems, and the elimination of weaknesses in the technological process;
– State structures: the possibility of developing unified standards for the registration and verification of plagiarism and scientific novelty in academic and technical information, and the possibility of using the system for reverse engineering to solve import-substitution problems.
Each of these stakeholders can put forward their own requirements for the developed system. In general, the capabilities of the "Technology constructor" can meet these demands. They are as follows: 1. The system creates a complete picture of the studied technology. This picture includes the primary materials and equipment, the operating principles and the physical and chemical effects on which the process is based, as well as the energy and labor costs both for individual sections of the cycle and for the entire technological process as a whole. 2.
The method information structured allow one to use it in two ways: both for working with it and for machine processing, the ease of which is ensured by uniform structuring according to a single template. Methods for processing can be very diverse. 3. All information is entered into the system and stored simultaneously in several languages, including international English, thus providing the opportunity for broad access to the project for specialists from different language spaces. Let us note that overcoming the language barrier is one of the most urgent tasks for Russian scientists and engineers. So far no solution would suit everyone. "Technology constructor" will solve this problem in the following way. The system is initially formed as multilanguage and does not require further translation. At the moment, this is a secure and reliable way to provide the project with a broad audience coverage. Of course, the instrument itself cannot claim to cover all stages of creating complex systems. But it can flawlessly perform the functions of an information resource that is convenient for machine processing by various methods. They can be methods of creative search intensifying, such as morphological, functional analysis, any tricks in the framework of TRIZ, up to the methods of mathematical statistics [6], or as it is now commonly called -Data Science. As for the last, one knows that Data Science ideally manipulates raw data after appropriate processing. But in our case, there is the opportunity to apply the methods of data science to a template filled with information. One should not be confused by an element of subjectivism, which will undoubtedly take place with this approach. It occurs when processing any raw data. Moreover, by the Data Science rules, the researcher should always ask the questions "What is the goal of our search? What patterns do we expect to find?", but in this case, we are free to set the task quite individually. The development of "Technology constructor" raises questions of creating and reviewing content. A multidisciplinary professional team, which should expand as information about an increasingly wide range of technologies accumulates, can be best in content reviewing. The way system information structured is as follows. All data include several categories, which in turn have a hierarchical structure. They are a software and information complex with the ability to process computer information stored in memory. This complex includes models of various technologies, as well as equipment, materials, and effects of diverse nature that occur when working with this particular technique. Thus, the central category of data is "Technology." This class in its turn has some daughter elements. One can scale the system. The technologies in question can be different. However, One does not try to cover everything. Let us deal exclusively with industrial technologies. Within the framework of this project, the authors divided all industrial technologies into four categories: Further, one should keep in mind that any technology must have something at the input (specific resources), also the process itself, and something at the output (results). It follows from the very logic of the application of any technology (see Figure 2). 
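Before the input categories are described in detail, the sketch below illustrates one possible way such a structured technology record could be organized in software. It is purely illustrative: the field names, the choice of categories, and the example values are assumptions based on the template described in this section (resources at the input, the process itself, results at the output, problem areas, and underlying physical effects), not the actual schema of the "Technology constructor."

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class InputResources:
    """Resources consumed at the input of the technological process."""
    systems: List[str] = field(default_factory=list)        # equipment and tools
    materials: List[str] = field(default_factory=list)      # raw materials, consumables
    energy: Dict[str, float] = field(default_factory=dict)  # energy carriers and amounts
    staff: List[str] = field(default_factory=list)          # required qualifications

@dataclass
class TechnologyRecord:
    """One structured entry in the information part of the system."""
    name: str
    physical_effects: List[str]                  # effects the process is based on
    inputs: InputResources
    process_stages: List[str]                    # the technological process itself
    outputs: List[str]                           # results / products
    problems: Dict[str, List[str]] = field(default_factory=dict)  # weaknesses by node or stage
    descriptions: Dict[str, str] = field(default_factory=dict)    # multilanguage descriptions

# Example with invented contents, for illustration only.
example = TechnologyRecord(
    name="Example heat treatment",
    physical_effects=["heat conduction"],
    inputs=InputResources(systems=["furnace"], materials=["steel billet"],
                          energy={"electricity, kWh": 120.0}, staff=["operator"]),
    process_stages=["loading", "heating", "holding", "cooling"],
    outputs=["treated billet"],
    problems={"stage: heating": ["uneven temperature field"]},
)
print(example.name, "->", ", ".join(example.process_stages))
```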
Considering this scheme in more detail, a technology is a set of methods making up the technological process, used to transform raw materials and energy by means of equipment and tools and involving personnel of varying degrees of skill. Thus, at the input we have the following set of categories (see Figure 3). Let us clarify in Table 2 what is meant by each of these categories and what they include:
– Systems: any equipment and tools, from a screwdriver to an entire industrial enterprise [8];
– Materials: consumables and the raw materials themselves in various aggregate states, parts and components, oils and greases, and combustible materials if they are used as raw materials rather than as fuel;
– Energy: all kinds of energy, including electrical energy, heat in the form of fuel, solar power, etc.;
– Staff: specialists with a set of specific knowledge and skills.
As can be seen from the table, the "Systems" group contains any equipment, which in turn can be described, from the system engineer's point of view, as a system or as a hierarchical structure of simple and complex systems. This is convenient, since information about these systems can be disclosed to any level of detail. If something cannot be classified as "Systems", it should be placed in the category of "Materials." Some materials represent a fuel; these should be placed in the category of "Energy" so that information about the technological process can be analyzed with a view to minimizing energy costs [7]. The "Energy" category is fundamental for optimizing costs and increasing the energy efficiency of production. In general, the energy conversion in the system obeys E_in = E_out + E_d, where E_out is the energy received at the output of the process; E_in is the energy at the input to the technological process, which can be supplied in the form of electrical or thermal energy or as organic fuel; and E_d is the energy lost or dispersed during the process. This balance determines the efficiency indicator E_w of the system, which can serve as a criterion for the energy efficiency of the technology. Results and discussion Technical systems are visualized within the "Technology constructor" to facilitate the search for, and elimination of, problematic parts of a technology down to individual nodes [9]. The system adds data on the functions of individual nodes in a unified format, so that this information can be processed using functional analysis. Also, the "Technology constructor" stores information about the problematic parts of the technology, structured as follows: a) problems of individual components of the equipment used; b) problems of particular stages of the technological process; c) common problems not attributed to a specific node or stage of the technology, or temporarily unclassified issues. The "Technology constructor" also contains information about the technological process itself, which is the essence of the technology. There are also characteristics of a technology that do not fall into the listed templates but are no less necessary for working with scientific and technical information. They include the central principle (the physical or chemical effect the technological process is based on), the tasks solved using the technology, the order and scope of application, process parameter calculations, and others. Conclusion In fact, the basis of any technology is the use of one or more physical or chemical effects.
This is especially important for analyzing the various types of energy transformation that take place when specific effects occur. Such an approach is a necessary element for modeling any industrial technology. Thus, the system summarizes data on all essential parameters of the technological process. It will finally make it possible to answer many of the questions that an engineer or scientist faces in the process of technical creativity. The most important of these is improving a technology by taking into account the direction of energy and material transformations, together with a quantitative evaluation of these changes. In the long term, this will allow the problems of engineering and scientific search to be approached with system modeling methods, making it possible to solve part of these tasks using computer systems.
The Sherwood-Relics simulations: overview and impact of patchy reionization and pressure smoothing on the intergalactic medium We present the Sherwood-Relics simulations, a new suite of large cosmological hydrodynamical simulations aimed at modelling the intergalactic medium (IGM) during and after the cosmic reionization of hydrogen. The suite consists of over 200 simulations that cover a wide range of astrophysical and cosmological parameters. It also includes simulations that use a new lightweight hybrid scheme for treating radiative transfer effects. This scheme follows the spatial variations in the ionizing radiation field, as well as the associated fluctuations in IGM temperature and pressure smoothing. It is computationally much cheaper than full radiation hydrodynamics simulations and circumvents the difficult task of calibrating a galaxy formation model to observational constraints on cosmic reionization. Using this hybrid technique, we study the spatial fluctuations in IGM properties that are seeded by patchy cosmic reionization. We investigate the relevant physical processes and assess their impact on the z>4 Lyman-alpha forest. Our main findings are: (i) Consistent with previous studies patchy reionization causes large scale temperature fluctuations that persist well after the end of reionization, (ii) these increase the Lyman-alpha forest flux power spectrum on large scales, and (iii) result in a spatially varying pressure smoothing that correlates well with the local reionization redshift. (iv) Structures evaporated or puffed up by photoheating cause notable features in the Lyman-alpha forest, such as flat-bottom or double-dip absorption profiles. INTRODUCTION The ionizing UV emission produced by the first stars and galaxies in the high-redshift Universe transforms the surrounding IGM from a neutral gas to a highly ionized plasma. At the same time, it is photoheated from a few Kelvin to ∼ 10 4 K. As this process of cosmic reionization proceeds, ionized regions grow, start to overlap, and eventually become volume filling (e.g., see reviews by Rauch 1998;Meiksin 2009;McQuinn 2016). This inherently inhomogeneous process results in an almost fully ionized IGM. Neutral gas is only present in dense regions that can self-shield from the cosmic UV background. The photoheating provided by cosmic reionization increases the gas pressure in the IGM such that it becomes dynamically relevant on small scales. While gravity still dominates the formation of structures in the baryonic density field on large scales, small scales in the IGM are notably affected by the hydrodynamic reaction to the photoheating. Overpressurized regions expand, thereby erasing structure on small scales (e.g., Gnedin & Hui 1998;Theuns et al. 2000;Kulkarni et al. 2015;Rorai et al. 2017;Wu et al. 2019;Katz et al. 2020;Nasir et al. 2021). How exactly this process proceeds is sensitive to the amount of heating provided by reionization, but also to the initial properties of the neutral gas, such as the relative streaming velocity between baryons and dark matter and the amount of preheating by X-rays penetrating into neutral regions (see, e.g., Hirata 2018;Park et al. 2021;Long et al. 2022). Understanding pressure smoothing is relevant both for explaining the formation (or lack of formation) of galaxies in low mass halos, as well as the properties of the IGM during and after cosmic reionization. Here, we concentrate on the latter, i.e. 
on studying the immediate impact of patchy cosmic reionization on the IGM, as well as the relic signatures of patchy reionization that persist for a significant amount of time in the post-reionization IGM (e.g., Lidz The IGM is most readily observed in absorption, in particular using the Lyman-α line of neutral hydrogen, which imprints a forest of absorption lines on the spectra of background quasars. The structure of this Lyman-α forest on small scales is affected by the instantaneous temperature of the IGM via the Doppler broadening of the lines, by its thermal history via the pressure smoothing, and by the small scale structure in the dark matter density field via gravitational interaction. The latter has been exploited by using the Lyman-α forest on small scales to probe the free streaming scale of dark matter particles (e.g., Iršič et al. 2017;Rogers & Peiris 2021), which for thermal relic dark matter is directly related to the dark matter particle mass. Such Lyman-α forest constraints on dark matter are best derived at high-redshift when the relevant scales are not yet completely dominated by mode coupling due to non-linear structure growth and when the Lyman-α forest is sensitive to low (less non-linear) densities. Furthermore, for fixed comoving free-streaming length the cut-off in velocity space is at larger scales/smaller k at higher redshift, and thus -at least in principle -easier to detect. At very high redshift, when entering the epoch of reionization, the Lyman-α forest becomes completely opaque. Hence, the sweet spot for dark matter constraints is in the redshift range 4 z 5.5, or in other words shortly after reionization. Understanding the thermal state and pressure smoothing of the IGM in this epoch is, hence, also important for probing dark matter. Interpreting observations of the IGM typically relies heavily on comparison to cosmological hydrodynamical simulations. The main differences between the various simulations used for this purpose are how the ionizing sources, i.e. the galaxies, and the ionizing radiation fields are treated. In the most simple approach, these are not treated explicitly (see, e.g., Bolton et al. 2017;Rossi 2020;Chabanier et al. 2020;Walther et al. 2021;Villasenor et al. 2021, for recent works). Instead an external model for the ionizing UV background (UVB) is used, typically a spatially homogeneous, time varying UVB model. Such models are obtained by integrating the ionizing emission of stars and active galactic nuclei based on empirical constraints of their abundance and of the opacity of the IGM (e.g., Faucher-Giguère et al. 2009;Haardt & Madau 2012;Oñorbe et al. 2017;Puchwein et al. 2019;Faucher-Giguère 2020). When focusing on the low-density IGM probed by the Lyman-α forest, it is possible to neglect other forms of feedback from galaxies, as it does typically not reach the relevant low-density regions of the IGM at z 4 (although feedback will start to play a role by z ∼ 2, see e.g. Theuns et al. 2002;Viel et al. 2013;Chabanier et al. 2020). Hence, when using an external UVB model, one can reasonably ignore galaxy formation altogether in cosmological hydrodynamical simulations of the low-density IGM (Viel et al. 2004). Despite their simplicity and low computational cost, such simulations are also in remarkably good agreement with the observed properties of the Lyman-α forest at z 4, i.e. well after the end of reionization (e.g. Bolton et al. 2017). 
During patchy cosmic reionization, the ionizing radiation field is, however, highly inhomogeneous and simulations with a spatially homogeneous UVB model fail to reproduce the observed properties of the Lyman-α forest, such as the fluctuations of its opacity on large scales (e.g. Becker et al. 2015;Bosman et al. 2018;Eilers et al. 2018;Zhu et al. 2021;Bosman et al. 2022). To overcome these problems, the spatial distribution of ionizing sources and the resulting spatial fluctuations in the UV radiation field need to be modelled. This can either be done on the fly in full galaxy formation simulations with radiative transfer coupled to the hydrodynamics, or by doing the radiative transfer in post-processing, typically using a simpler model of the ionizing source populations. The latter approach is computationally much cheaper and can avoid all the complications of realistically modelling the galaxy population and the escape of ionizing radiation. Empirically constrained source models can be used to "paint" ionizing sources on the simulated dark matter halos. A weakness of this approach is that it may miss many details of the source population, such as the bursty nature of ionizing radiation production and escape. Nevertheless, the source luminosities can be calibrated such that the amount of ionizing radiation reaching the IGM is adequate for bringing the reionization history and the simulated properties of the Lyman-α forest in agreement with observational constraints. Calibrating in this manner makes a detailed modelling of the ionizing radiation escape from high-density regions in the simulation unnecessary. Hence, the post-processing radiative transfer can be done at coarser resolution using an uniformly spaced grid. This also allows efficient parallelization on GPUs (e.g., Aubert & Teyssier 2010), making this approach numerically cheap. Such calculations are very successful in matching the properties of the Lyman-α forest on large scales during and directly after cosmic reionization (e.g., Kulkarni et al. 2019). On the downside, post-processing radiative transfer neglects the hydrodynamic reaction of the IGM to the inhomogeneous photoheating. The thermal and ionization states are only re-calculated in post processing so that the heating is not coupled to the hydrodynamics. Hence, a self-consistent modelling of pressure smoothing is not possible with this approach. Furthermore, the thermal and ionization states are typically stored on a static grid so that the thermal energy injected by photoheating as well as the ionization state are not advected with the gas flow. Finally, post-processing radiative transfer codes that follow only heating by the UV radiation field miss other heating mechanisms that are present in a hydrodynamic simulation, such as shock heating in and around forming structures. All of these issues can be fixed by doing the radiative transfer in a fully coupled manner on the fly. Such radiation-hydrodynamics simulations can be done with external ionizing sources to study the reaction of the IGM (e.g., Park et al. 2016;D'Aloisio et al. 2020) or following the formation of high-redshift galaxy populations and the escape of ionizing radiation from them self-consistently. The latter is, however, computationally very expensive. Usually some corners need to be cut to make this feasible at all, e.g., doing the radiative transfer with a reduced speed of light. 
In addition, the modelling of galaxy formation in such simulations is highly uncertain, in particular during the epoch of reionization where only very limited observational constraints on the galaxy population are available, making sanity checks on the simulated population difficult. This is further exacerbated by the fact that the escape of ionizing radiation depends on the detailed properties of the interstellar medium, which is very challenging to model faithfully in cosmological simulations. Despite these difficulties, there has been major progress in this direction in recent years. For example, the CROC (Gnedin 2014), Sphinx (Rosdahl et al. 2018), Technicolor Dawn (Finlator et al. 2018), CoDa (Ocvirk et al. 2016, 2020; Lewis et al. 2022) and Thesan (Garaldi et al. 2022; Smith et al. 2022) collaborations were able to perform fully coupled radiation-hydrodynamics simulations of cosmological volumes along with a modelling of galaxy formation.

In this work, we focus on the low-density IGM probed by the Lyman-α forest. We build on our previous Sherwood simulation project (Bolton et al. 2017), which showed excellent agreement with various statistics of the Lyman-α forest at 2 ≲ z ≲ 5. Our aim is to produce a simulation suite that samples a wide range of astrophysical and cosmological parameters and that can be used for parameter inference when compared to the observed Lyman-α forest. At the same time, we aim to overcome limitations at the high-redshift end, where the relic signatures that a recently completed patchy cosmic reionization has imprinted on the IGM become increasingly important. To this end, we introduce a new hybrid radiative transfer/hydrodynamical simulation technique that aims at combining many of the positive aspects of post-processing radiative transfer and fully coupled radiation-hydrodynamics simulations while avoiding some of their major downsides. In Sec. 2 of this manuscript, we will describe the simulation methods in detail. Sec. 3.1 will give an overview of how varying different parameters affects the Lyman-α forest on different scales. Sec. 3.2 focuses on our hybrid patchy reionization simulations, with the thermal state of the IGM being discussed in 3.2.1, the modulation of the Lyman-α forest on large scales in 3.2.2, the spatially varying pressure smoothing of the IGM in 3.2.3, and its direct imprints on the Lyman-α forest in 3.2.4. We summarize our results in Sec. 4.

2.1 The Sherwood-Relics simulation suite

The Sherwood-Relics suite that we present here builds upon the Sherwood simulation project (Bolton et al. 2017). In particular, we simulate the same volumes and compute mock Lyman-α forest absorption spectra on the fly in the same way. We do, however, expand the sampled space of astrophysical and cosmological parameters significantly. We explore different reionization and heating histories, different values for the cosmological parameters most relevant for the Lyman-α forest (σ8 and ns), and investigate the impact of the patchiness of cosmic reionization on the high-redshift IGM (z ≳ 4). Furthermore, since the sweet spot for Lyman-α forest constraints on dark matter also falls in this redshift range, we simulate models with a range of different dark matter free-streaming scales. Table 1 provides an overview of the over 200 different simulations performed for this project. On a technical level, the main improvements compared to the Sherwood simulations are a non-equilibrium thermo-chemistry solver and an improved treatment of the ionizing radiation fields.
The latter includes both simulations with an improved time-dependent but spatially homogeneous UV background model (as detailed in Sec. 2.3, based on Puchwein et al. 2019), as well as with a new hybrid post-processing radiative transfer/hydrodynamical simulation treatment of patchy reionization (as introduced in Sec. 2.4). Several studies have already made use of the Sherwood-Relics suite. Using our hybrid simulations, Molaro et al. (2022) have derived corrections for the impact of the patchiness of reionization on the Lyman-α forest flux power spectrum. These corrections can be applied to conventional simulations with homogeneous UVB models and will be used together with our large grid of homogeneous UVB simulations in forthcoming studies. Other aspects of our hybrid simulations have also already been explored in Gaikwad et al. (2020), focusing on the properties of Lyman-α transmission spikes, and in Šoltinský et al. (2021), predicting the 21-cm forest. Lamberts et al. (2022) have used our baseline homogeneous UVB simulation as a reference for the expected thermal history during He ii reionization. The main science focus of this work is to provide a comprehensive introduction to the Sherwood-Relics simulations and to investigate the physical processes by which patchy reionization affects the IGM and Lyman-α forest.

Table 1. Summary of the Sherwood-Relics simulation suite. The columns contain (i) the box size, (ii) the cube root of the (initial) gas particle number, (iii) the dark matter and (iv) gas mass resolution, (v) the gravitational softening, as well as several properties that have been varied between the different runs. This includes (vi) the dark matter model (cold dark matter or warm dark matter particle mass), (vii) the H i photoheating normalization factor (see Sec. 2.3), (viii) the global reionization redshift zr at which H i reionization completes (defined as the redshift when the volume-averaged neutral fraction in the simulation falls below 10^-3), and (ix) the redshift z_mid when the volume-averaged neutral fraction reaches 0.5. The last column provides (x) the number of runs N_runs, as well as any further comments.

2.2 The simulation code

All cosmological hydrodynamical simulations that we present in this work were performed with modified versions of the p-gadget3 code, itself an updated and extended version of p-gadget2 (Springel 2005; https://wwwmpa.mpa-garching.mpg.de/gadget/). The code follows the gravitational interactions with an efficient, parallel Tree-PM gravity solver and the hydrodynamics with an energy- and entropy-conserving smoothed-particle hydrodynamics (SPH) scheme (Springel & Hernquist 2002). The treatment of radiative cooling assumes a primordial composition of the gas with a hydrogen and helium mass fraction of 76 and 24 per cent respectively. The ionization and thermal state of the gas is then followed with a non-equilibrium solver that integrates the ionization, recombination, cooling and heating rate equations using sub-cycling and adaptive time steps (see Puchwein et al. 2015). The cvode library (https://computing.llnl.gov/projects/sundials) is used for this purpose. Following the full non-equilibrium equations avoids an artificial delay between photoionization and photoheating that is present in simulations with an equilibrium solver (see also Gaikwad et al. 2019; Kušmić et al. 2022).
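As an illustration of the kind of non-equilibrium update described above, the following is a minimal Python sketch, not the actual p-gadget3/cvode implementation, that integrates a simplified hydrogen ionization rate equation for a single gas element so that photoionization and the associated photoheating respond without the artificial delay of an equilibrium solver. The rate coefficients and all numerical values are rough, illustrative approximations only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rough, illustrative rate coefficients (cgs); not the fits used in the actual code.
def alpha_A(T):          # H II recombination coefficient [cm^3 s^-1]
    return 4.2e-13 * (T / 1.0e4) ** -0.7

def gamma_coll(T):       # H I collisional ionization coefficient [cm^3 s^-1]
    return 5.85e-11 * np.sqrt(T) * np.exp(-157809.1 / T)

def rhs(t, y, n_H, gamma_phot, E_phot_heat):
    """y = [x_HII, u]; x_HII = ionized fraction, u = thermal energy per H atom [erg]."""
    x_HII, u = y
    x_HI = 1.0 - x_HII
    n_e = x_HII * n_H                       # electrons from hydrogen only
    T = max(u / (1.5 * 1.380649e-16 * (1.0 + x_HII)), 1.0)  # ideal monatomic gas

    dx_dt = gamma_phot * x_HI + gamma_coll(T) * n_e * x_HI - alpha_A(T) * n_e * x_HII
    du_dt = gamma_phot * x_HI * E_phot_heat                  # photoheating only
    return [dx_dt, du_dt]

# Example: roughly mean-density gas suddenly exposed to an ionizing flux.
n_H = 2.0e-4                     # hydrogen number density [cm^-3] (illustrative)
gamma_phot = 3.0e-13             # H I photoionization rate [s^-1] (illustrative)
E_phot_heat = 5.03 * 1.602e-12   # energy injected per ionization [erg] (illustrative)

y0 = [1.0e-4, 1.5 * 1.380649e-16 * 100.0]   # almost neutral gas at T ~ 100 K
sol = solve_ivp(rhs, (0.0, 3.0e13), y0, args=(n_H, gamma_phot, E_phot_heat),
                method="LSODA", rtol=1e-6, atol=1e-12)
print("final ionized fraction:", sol.y[0, -1])
```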
The following rate coefficients are assumed: the case A recombination rates of Verner & Ferland (1996), the He ii dielectronic recombination rate of Aldrovandi & Pequignot (1973), the collisional excitation rates of Cen (1992), the collisional ionization rates of Voronov (1997), and the free-free Bremsstrahlung rate of Theuns et al. (1998). In most of our runs, photoionization and photoheating is followed based on external, spatially homogeneous models of the UV background (UVB), while in our patchy reionization simulations we interpolate from maps of the radiation field obtained with the aton radiative transfer code. We will discuss this in more detail below. Since we are primarily interested in the low-density IGM, we accelerate our simulations by converting all gas particles that exceed a density of 1000 times the mean cosmic baryon density and have a temperature smaller than 10 5 K to collisionless star particles. While this approach does not yield realistic galaxies, it accurately predicts the properties of the low-density IGM (Viel et al. 2004). Simulations with a homogeneous UVB For our baseline simulations, we use time-varying but spatially homogeneous photoionization and photoheating rates from the fiducial UVB model presented in Puchwein et al. (2019, see their table D1). These were derived in such a way that gas exposed to this UVB follows a (largely) realistic cosmic reionization and heating history. Hydrogen reionization finishes at z ∼ 6, while He ii reionization ends at z ∼ 2.8. In particular, simulations with this model avoid an artificially accelerated reionization (see also Oñorbe et al. 2017). In comparison, in simulations with, e.g., an Haardt & Madau (2012) UVB model, hydrogen reionization would be essentially completed by z ∼ 11 (see Puchwein et al. 2019). In addition to our baseline simulations, we perform simulations with various modifications of the Puchwein et al. (2019) fiducial UVB model in order to sample different reionization histories as well as a wider range of IGM temperatures (see Villasenor et al. 2022 for a similar approach). For example, we produce simulations with a colder or hotter IGM by rescaling the photoheating rates while keeping the photoionization rates fixed. The rescaling factors used for the different runs are provided in Table 1. For the colder (hotter) models, we use the same factor of 0.5 (2) for the H i and He i photoheating rates, but a different factor of 0.66 (1.5) for He ii as the latter primarily affects the thermal history at lower redshifts during the epoch of He ii reionization. In addition, we also vary the global reionization redshift (which we define as the time when the volume-averaged neutral fraction in the simulation falls below 10 −3 ) while keeping the instantaneous gas temperature at z < 5 fixed by performing a linear redshift rescaling of our fiducial UV background model at z > 5. The different homogeneous UVB models considered here complete reionization in the redshift range zr = 5.3 to 7.4, providing a range of models with different amounts of pressure smoothing. These reionization histories corresponds to a rescaling of the fiducial UV background redshift coordinate at z > 5 (i.e. of z − 5) by a factor of 0.89 to 1.24. In addition, to ensure the instantaneous gas temperatures of the models remain the same at z < 5, we multiply the H i photoheating rates by factors 0.9 to 1.5 at z > 5, with the higher factors being used for models with earlier reionization. 
Hybrid, patchy reionization simulations A time-varying but spatially homogeneous UVB model cannot capture the large spatial fluctuations in the ionizing radiation field that are present during the patchy cosmic reionization process. A homogeneous UVB model can only aim to provide suitable mean values. However, the radiation field will be entirely different in ionized bubbles and neutral regions. In addition to the direct effect this has on the ionization state, this also seeds fluctuations in the IGM temperature and pressure smoothing on large scales. These fade only slowly and persist well into the post-reionization epoch (see, e.g., D'Aloisio et al. 2015;Keating et al. 2018). To capture these effects, while avoiding the enormous computational cost of fully coupled radiation hydrodynamics simulations, we use a new hybrid scheme that combines relatively cheap post-processing radiative transfer simula-tions on a fixed Eulerian grid, with cosmological hydrodynamical simulations that capture the hydrodynamic response to photoheating by an inhomogeneously evolving UV radiation field. Our scheme has some common features with the hybrid technique introduced in Oñorbe et al. (2019). Their scheme uses a semi-numerical, excursion-set method to obtain a map of the local reionization redshift across the simulation volume. For each resolution element in the simulation, they then switch on a time-dependent UVB model at the pre-computed local reionization redshift of the element. The UVB seen by their simulation is otherwise homogeneous, i.e. spatially constant across ionized regions. Our scheme instead uses post-processed radiative transfer simulations to provide an inhomogeneously evolving UV radiation field. This captures the effect of a spatially-varying reionization redshift, but also of spatial fluctuations in the radiation field within ionized regions, which can be significant near the tail end of reionization. Our scheme consists of the following steps: • Performing a cosmological hydrodynamical simulation with a homogeneous UVB model. We use our baseline simulation discussed above for this purpose. To provide sufficient time resolution for the next step, we have saved outputs every 40 Myrs. • Performing a post-processing radiative transfer simulation on the outputs of the cosmological hydrodynamical simulation. We do this on a fixed Eulerian grid with a slightly modified version of the highly efficient, GPUaccelerated aton code (Aubert & Teyssier 2008, 2010. It uses a moment-based radiative transfer scheme that assumes the M1 closure approximation and uses the full physical speed of light. We use a number of grid cells that equals the (initial) number of gas particles in the hydrodynamical simulation (i.e., 2048 3 for the patchy simulations presented later in this work). The challenging task of accurately predicting source luminosities from galaxy formation physics is bypassed by empirically calibrating the amount of ionizing radiation escaping from halos to observational constraints on the ionization state of the IGM, e.g., based on the very high-redshift Lyman-α forest and Thomson scattering optical depth measurements from CMB data. Halos are then populated with ionizing sources based on their mass. Full details of this method are provided in Kulkarni et al. (2019). For our hybrid simulations completing reionization at a redshift of 5.3, we use the redshift evolution of the ionizing emissivity that was derived in that study (see their fig. 1). 
For our 40 cMpc/h box, we apply a redshift-independent boost factor to the emissivity of 1.265 to account for the smaller box size and higher numerical resolution, which changes the resolved part of the escape fraction. We use a single frequency bin and assume a somewhat smaller (mean) photon energy of 18.63 eV. We apply the same strategy for calibrating the redshift evolution of the emissivity for our earlier reionization scenarios. • The modified aton code version saves maps of the photoionization rate every 40 Myrs. In addition, it produces a map of the local reionization redshift, which for each grid cell we define as the redshift at which an ionized, i.e. H ii, fraction of 3 per cent is exceeded for the first time. This records when a cell starts to get ionized. As cells ionize rapidly once they are reached by an ionization front, the recorded ioniza- tion redshift is rather insensitive to the exact threshold value used. Fig. 1 shows a slice of a local reionization redshift map produced in this way. Typically high-density regions containing many ionizing sources reionize first, while remote voids are the last regions to reionize. • Finally, we perform a second cosmological hydrodynamical simulation of the same volume. In this run our modified version of p-gadget3 loads the maps of the photoionization rate produced by aton and uses them as a spatially varying UV background for following photoionization and photoheating during patchy reionization. At each time step and for each SPH particle, local values of the the photoionization and photoheating rates are computed as follows. If the host cell of the particle has not started to be photoionized in the aton simulation yet (H ii fraction continues to be smaller than 3 per cent), the rates are assumed to be zero so that we can completely skip the integration of the rate equations and assume the gas in this SPH particle is neutral. At redshifts lower than the local reionization redshift, the H i photoionization rate of the host cell is interpolated between the nearest aton output times and adopted for the SPH particle and the current time step. A slightly different treatment is used right after an ionization front has reached a particle, i.e. between the local reionization redshift assigned from the map and the next (lower) aton output redshift. In this case we use the rate of the next (lower redshift) map directly without interpolation. This results in a larger jump in the photoionization rate at the recorded local reionization redshift, corresponding to a quickly passing ionization front. Our tests showed that this leads to a smoother growth of the ionized regions in the hydrodynamical simulation, thereby further reducing residual imprints of the finite number of aton outputs. Since we cannot capture differences between H i and He i reionization with a single frequency bin in the radiative transfer, we simply adopt the H i photoionization rate also for He i. The H i photoheating rate is computed based on the assumed mean photon energy, i.e. the photoionization rate is simply multiplied by 18.63 eV − 13.6 eV = 5.03 eV. The He i photoheating rate (per atom) is then assumed to be 1.3 times that of H i, in rough agreement with the ratio of the two in the homogeneous, synthesis UVB model from Puchwein et al. (2019). Finally, the He ii photoionization and photoheating rates are assumed to be spatially homogeneous and are adopted from the Puchwein et al. (2019) fiducial UVB model. 
The He ii rates play, however, a role only at lower redshifts and are negligible during the epoch of H i reionization, except in the proximity of quasars (e.g. Bolton et al. 2012), which we do not model here. Our hybrid technique could be extended to lower redshifts by using multi-frequency radiative transfer simulations that follow He ii reionization; alternatively, a semi-analytic model like that in Upton Sanderbeck & Bird (2020) could be used. Here, we refrain from such attempts, focus on high redshifts, and use the local rates derived as described above to integrate the ionization and cooling/heating rate equations in the same manner as in our homogeneous UVB simulations (a schematic sketch of this per-particle rate lookup is given below).

The results from a patchy, hybrid radiative transfer/cosmological hydrodynamical simulation are shown at redshift z ∼ 7 in Fig. 2.

Figure 2. Results are shown at z ∼ 7 for simulations that complete reionization at zr = 5.3. By construction, all three simulations have very similar (volume-weighted) H i fractions of ∼ 45% and temperatures at mean density of T0 ∼ 6900 K. While the hybrid simulation has similar ionized/neutral regions as the aton run, it also accounts for shock heating of gas in high-density regions, consistent pressure smoothing, and the advection of the thermal energy in gas flows.

The left-hand panels show the neutral hydrogen density (top) and gas temperature (bottom) in a p-gadget3 simulation with a homogeneous UVB. The middle panels show the post-processed radiative transfer simulation performed with the aton code, while the right-hand panels show the results of the hybrid radiative transfer/cosmological hydrodynamical simulation. As expected, the simulation with a homogeneous UVB completely misses the patchy nature of cosmic reionization. The aton simulation nicely displays the complicated morphology of ionized bubbles, but misses some important physics. In particular, running the calculation in post-processing means that the dynamical effect of inhomogeneous photoheating (e.g., pressure smoothing) and its impact on the gas density distribution cannot be followed. In addition, the thermal energy injected by photoheating is stored for each grid cell, but not properly advected with the gas flow. This can sometimes be seen as dense gas leaving a wake of increased temperature when it falls towards a structure. Also, the temperature evolution in the aton simulation only accounts for photoheating and adiabatic evolution, but misses shock heating of gas in dense regions. The patchy, hybrid p-gadget3 cosmological hydrodynamical simulation (right panels) captures all these aspects. The morphology of ionized and neutral regions is almost identical to the aton simulation, but, e.g., the temperature in halos is larger due to the inclusion of shock heating. The dynamical impact of inhomogeneous photoheating in our patchy p-gadget3 simulation will be discussed in detail in Sec. 3.2.3.

At first glance our multi-step hybrid radiative transfer/cosmological hydrodynamical simulation scheme may seem complicated, e.g., compared to a single fully coupled radiation-hydrodynamics simulation. There are, however, several distinct advantages. First, the computational cost is much lower compared to a full radiation hydrodynamics calculation (e.g., 0.5 million core hours on CPUs plus 3 thousand GPU hours for our zr = 5.3 patchy run, compared to 28 million core hours on CPUs for the main Thesan run; Kannan et al. 2022).
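The per-particle rate assignment described in the final step of the hybrid scheme above can be summarized in a short Python sketch. The array names, shapes, and bracketing logic below are illustrative assumptions rather than the actual p-gadget3 implementation.

```python
import numpy as np

def local_photoionization_rate(pos, z_now, z_reion_map, rate_maps, map_redshifts,
                               box_size, n_grid):
    """Return the H I photoionization rate for one SPH particle.

    pos          : particle position (comoving), array of shape (3,)
    z_now        : current simulation redshift
    z_reion_map  : grid of local reionization redshifts (n_grid^3)
    rate_maps    : list of photoionization-rate grids saved by the RT code
    map_redshifts: redshifts of those maps, sorted from high to low z
    """
    # Index of the particle's host cell on the RT grid (periodic box).
    i, j, k = (np.floor(pos / box_size * n_grid).astype(int) % n_grid)

    # Cell not yet reached by an ionization front: treat the gas as neutral.
    if z_now > z_reion_map[i, j, k]:
        return 0.0

    # Find the two saved maps bracketing the current redshift.
    idx = np.searchsorted(-np.asarray(map_redshifts), -z_now)
    idx = min(max(idx, 1), len(map_redshifts) - 1)
    z_hi, z_lo = map_redshifts[idx - 1], map_redshifts[idx]

    # Right after the local front has passed (between the recorded local
    # reionization redshift and the next, lower-z map), use that next map
    # directly instead of interpolating, mimicking a quickly passing front.
    if z_reion_map[i, j, k] <= z_hi:
        return rate_maps[idx][i, j, k]

    # Otherwise interpolate linearly in redshift between the two maps.
    w = (z_hi - z_now) / (z_hi - z_lo)
    return (1.0 - w) * rate_maps[idx - 1][i, j, k] + w * rate_maps[idx][i, j, k]
```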
The reason for this is that the radiative transfer is done on a somewhat coarse fixed Eulerian grid (cell size 19.5 h −1 ckpc in our 40 h −1 cMpc patchy simulations), and hence allows for relatively large Courant time steps as well as an efficient parallelization on powerful GPUs. Second, as we empirically calibrate the emission that escapes into the IGM, we can continue to use our strongly simplified galaxy/star formation model (see Sec. 2.2) and avoid all the complications of radiation-hydrodynamically modelling realistic source galaxy populations as well as the escape of ionizing radiation from them. Third, finding a calibration of the source luminosity as a function of halo mass that agrees with observational constraints on the ionization state of the IGM (using only cheap post-processing radiative transfer simulations for this purpose) is much simpler than modifying a full simulation model of galaxy formation in such a way that the same is achieved. Finally, the overhead of producing the first hydrodynamical simulation with a homogeneous UVB came (at least in this project) for free in practice, as we would have needed this run in any case as a baseline model for the comparison to the large number of simulations with different ionization histories and dark matter models (see Sec. 2.1). Of course the computational efficiency of our approach also comes at a price. By saving only a limited number of maps of the radiation field on a fixed grid, we have access to the radiation field only with a limited spatial and time resolution when following photoionization and photoheating in the second hydrodynamical simulation. Also, the coupling of the radiative transfer to the hydrodynamics is not fully self-consistent in cases where the difference in pressure-smoothing between the first and second hydrodynamic simulation significantly changes the opacity of the medium and the local radiation field. We expect these effects to primarily play a role on small scales in dense systems. We would hence not advise to use this method for investigating, e.g., the escape of ionizing radiation from galaxies, and one should probably be careful when studying the details of selfshielding of dense gas. The impact of large scale fluctuations (on the scale of the size of ionized bubbles) on ionization and pressure-smoothing of the low-density IGM should, however, be robustly predicted. To isolate the effects of the patchiness of reionization from the effects of differences in the reionization history, we have performed simulations with homogeneous UVB models that produce the same average reionization and thermal histories as our hybrid, patchy reionization simulations. Details on how a suitably tailored UVB model for such a simulation is obtained are provided in Appendix A. We have performed such pairs of patchy and matched homogeneous simulations for different reionization histories with reionization completing at redshifts zr = 5.3, 5.7, 6.0 and 6.6. We will mostly concentrate on the first model in the analysis as its reionization history seems in best agreement with observations (e.g., Kulkarni et al. 2019;Bosman et al. 2022). Fig. 3 compares this matched homogeneous model to the corresponding patchy simulation. It displays the evolution of the mean and median IGM temperature at mean density, as well as the mass-and volume-weighted ionized hydrogen fraction. As planned, the IGM temperature and ionized fraction in the matched homogeneous run closely follow those in the patchy simulation. 
We have opted to follow the mean temperature at mean density of the patchy simulation during reionization, and the median temperature at mean density after reionization (see Appendix A for further details on this). Note that, despite the similar neutral fractions, the Lyman-α forest transmission properties will be quite different during reionization. While ionized regions can allow transmission in a patchy reionization scenario, a small amount of homogeneously distributed residual neutral gas that is present even late in the reionization process is sufficient to almost fully absorb the Lyman-α forest in a homogeneous model. This overly homogeneous distribution of neutral gas, which can be seen in the upper left panel of Fig. 2, is the main reason why simulations with a homogeneous UVB cannot reproduce the observed statistics of the Lyman-α forest at very high redshift (z ≳ 5.3; e.g., Becker et al. 2015). Patchy reionization simulations, even when done in post-processing, do a much better job (e.g., Kulkarni et al. 2019; Keating et al. 2020; Bosman et al. 2022).

3.1 The impact of cosmology, reionization model and IGM temperature on the Lyman-α forest

The Lyman-α forest is sensitive to a wide range of cosmological and astrophysical parameters and processes. We aim to sample many of the most relevant ones with the Sherwood-Relics simulation suite. Note that by z = 4.6, even in our patchy reionization simulations, large scale fluctuations in the ionizing radiation field have largely faded. We thus choose to rescale the optical depths in all simulations shown in Fig. 4 such that the mean transmission value is consistent with observations at that redshift. We have used the fitting function for the observed effective optical depth from Molaro et al. (2022) for this purpose (Eq. 1).

Figure 4. In the hot/cold models, the H i photoheating rate has been increased/decreased by a factor of 2. For reference, the shaded region indicates the relative error in the power spectrum measurement of Boera et al. (2019). Significant degeneracies between different modifications exist. Interestingly, the increase in power on large spatial scales in our patchy simulations seems to be a characteristic signature of inhomogeneous reionization.

Clearly, the different changes in the simulated physics leave specific imprints in the flux power spectrum. Models with a hotter/colder IGM (with H i and He i photoheating rates boosted/reduced by a factor of 2, see Sec. 2.3 for full details) have significantly less/more power on small spatial scales (large k). This is consistent with the expectation of increased thermal broadening and pressure smoothing at higher temperature. Pressure smoothing is also increased by an earlier reionization (see the zr = 7.5 model, blue solid curve). Interestingly, the corresponding suppression of power on intermediate scales, 10^-2 s km^-1 ≲ k ≲ 10^-1 s km^-1, is very similar to that in a warm dark matter model with 4 keV particle mass (green solid curve). This already suggests that there will be degeneracies between dark matter constraints and the thermal/reionization history of the IGM (see e.g. Viel et al. 2013; Iršič et al. 2017; Garzilli et al. 2019). One can aim to break these by including measurements at different redshifts, as well as at higher k (e.g. Nasir et al. 2016). Changing the cosmological parameters σ8 and ns has largely the expected effects (see also Viel et al. 2004; McDonald et al. 2005). Similar to the matter power spectrum, it changes the normalization and slope of the Lyman-α forest flux power spectrum.
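Returning to the mean-flux rescaling applied above: in practice the simulated optical depths are multiplied by a single factor chosen such that the mean transmitted flux matches the target value at that redshift. The short Python sketch below illustrates one way this could be done; the target value and mock optical depths are arbitrary, and this is not the pipeline used for the suite.

```python
import numpy as np
from scipy.optimize import brentq

def rescale_optical_depths(tau, target_mean_flux):
    """Scale optical depths by a constant A so that <exp(-A*tau)> matches
    the target mean transmitted flux, and return the rescaled flux."""
    def mismatch(a):
        return np.mean(np.exp(-a * tau)) - target_mean_flux

    # The mean flux decreases monotonically with A, so a simple bracketed
    # root find is sufficient.
    a = brentq(mismatch, 1e-3, 1e3)
    return a, np.exp(-a * tau)

# Example with mock optical depths (log-normal, purely illustrative).
rng = np.random.default_rng(2)
tau = rng.lognormal(mean=0.0, sigma=1.5, size=(5000, 1024))
scale, flux = rescale_optical_depths(tau, target_mean_flux=0.4)
print(f"scaling factor A = {scale:.3f}, mean flux = {flux.mean():.3f}")
```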
It is worth noting that the simulations with different dark matter particle masses and cosmological parameters shown in Fig. 4 use the same UVB/reionization model. This, hence, isolates the direct impact of these parameters on the IGM from their effects on the ionizing source galaxy population. There would be additional effects, in particular during reionization, when also modelling the impact of cosmology on the ionizing sources and hence on the reionization history and topology (e.g., Sitwell et al. 2014; Lopez-Honorez et al. 2017; Montero-Camacho & Mao 2021). We (partly) capture these effects by covering a range of reionization redshifts with our sample, so that the reionization redshift can be varied as a separate parameter when doing parameter inference. In this way, we can be agnostic about the details of the impact on the source galaxies. Furthermore, we find that, as already noted elsewhere (Cen et al. 2009; Keating et al. 2018; D'Aloisio et al. 2018; Oñorbe et al. 2019; Wu et al. 2019; Montero-Camacho & Mao 2020; Molaro et al. 2022), the patchy reionization simulations predict distinctive increases in power on the largest spatial scales (smallest k). We will discuss these in detail in Sec. 3.2.2. Our grid of simulations, as detailed in Table 1 and as (partly) shown in Fig. 4, will be used for parameter inference studies in forthcoming work. In the remainder of this study, we will concentrate on our new hybrid simulations and discuss the relic signatures that patchy cosmic reionization leaves in the IGM and Lyman-α forest.

Figure 5. Volume-weighted temperature-density distribution of gas in the patchy simulation that completes reionization at zr = 5.3 (bottom panels) and in the matched homogeneous simulation (top panels) at redshifts 4.2, 5.4 and 7.0. Both simulations have, by construction, similar volume-weighted neutral fractions of ∼0%, 4% and 45% at these redshifts, respectively. During reionization, cold gas that has not yet been ionized is present in the patchy run. In contrast, the same gas is partly ionized and heated in the homogeneous simulation. After reionization, the temperature-density relation in the patchy simulations is still broader at low densities.

3.2 The impact of patchy reionization on the IGM and the Lyman-α forest

3.2.1 The thermal state of the IGM during and after patchy cosmic reionization

During the era of cosmic reionization, energetic UV photons emitted by the first galaxy populations ionize the IGM. The excess energy of these photons beyond the ionization energy of the relevant atoms/ions is available for heating the IGM. Given the patchy nature of cosmic reionization, this causes significant temperature differences between regions that reionize at different times. Fig. 5 shows how this affects the temperature-density distribution of the IGM during and after cosmic reionization. At z = 7, when roughly half of the hydrogen is ionized, there is both cold, neutral gas and hot, ionized gas present in the zr = 5.3 patchy simulation, corresponding to neutral and ionized regions. The matched homogeneous UVB simulation instead contains only gas that has a temperature of at least several thousand Kelvin. Except for a small amount of shock-heated gas, the gas is partly ionized and partly photoheated (see also Fig. 2) and follows a tight temperature-density relation that is almost flat.
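Such a temperature-density relation is commonly summarized by a power law, T = T0 Δ^(γ-1). As a purely illustrative aside, and not the measurement procedure used in this paper, the sketch below shows one simple way T0 and γ could be estimated from particle data with a median-based fit around mean density.

```python
import numpy as np

def fit_t0_gamma(delta, temperature, delta_range=(0.5, 2.0)):
    """Fit T = T0 * Delta^(gamma - 1) to low-density gas.

    delta       : gas density in units of the mean baryon density
    temperature : gas temperature in K
    Returns (T0, gamma).
    """
    # Restrict to densities around the mean, where the relation is tight.
    sel = (delta > delta_range[0]) & (delta < delta_range[1])
    x = np.log10(delta[sel])
    y = np.log10(temperature[sel])

    # A median-based fit in narrow density bins is more robust to
    # shock-heated outliers than a direct least-squares fit.
    bins = np.linspace(x.min(), x.max(), 11)
    centers = 0.5 * (bins[:-1] + bins[1:])
    medians = np.array([np.median(y[(x >= lo) & (x < hi)])
                        for lo, hi in zip(bins[:-1], bins[1:])])

    slope, intercept = np.polyfit(centers, medians, 1)
    T0 = 10.0 ** intercept          # temperature at mean density (Delta = 1)
    gamma = slope + 1.0             # slope of the relation
    return T0, gamma

# Example with mock data: a nearly flat relation plus log-normal scatter.
rng = np.random.default_rng(0)
delta = 10 ** rng.uniform(-0.5, 0.5, 100000)
temp = 7000.0 * delta ** 0.1 * 10 ** rng.normal(0.0, 0.05, delta.size)
print(fit_t0_gamma(delta, temp))   # recovers roughly (7000, 1.1)
```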
This nearly flat relation corresponds to all gas having a similar thermal history, with little variation around the mean evolution shown in Fig. 3. It also explains the very similar mean and median thermal evolutions in the homogeneous model that are also indicated there. In the patchy simulations, in contrast, there is more variation in the temperature of the ionized gas, in particular at low densities, where temperatures span a range of 3000-20000 K, corresponding to different local reionization redshifts and consequently different amounts of cooling after reionization (e.g., Tittley & Meiksin 2007; Trac et al. 2008). Additional broadening of the temperature-density relation is expected from the evaporation of small structures by photoheating, which causes their gas content to cool by adiabatic expansion while driving shocks into small nearby voids that are thereby heated (Hirata 2018). The situation at z = 5.4 is qualitatively similar, just with much less cold, neutral gas remaining in the patchy run. At z = 4.2, well after the end of reionization (at zr = 5.3), the homogeneous and patchy simulations look more similar. The temperature-density relations have steepened somewhat (at least the upper envelope for the patchy run). In ionization equilibrium, photoionizations balance recombinations. Recombinations happen more frequently in dense gas, which consequently receives more photoheating per particle. In addition, the lowest density gas, i.e. the gas in voids, expands more strongly during cosmic expansion and structure formation, resulting in increased adiabatic cooling. In the patchy run, there is nevertheless a larger spread in temperature in very low density gas, as the temperature fluctuations seeded by inhomogeneous reionization fade only slowly. This is also shown in Fig. 6, which displays the IGM temperature in a thin slice through part of the simulation box.

Figure 6. Gas temperature in a thin slice through our matched homogeneous and patchy simulations which complete reionization at zr = 5.3. During reionization (redshift 7, both runs ∼ 45% neutral fraction, upper panels), the gas temperature in the simulation with the homogeneous UVB has only small fluctuations which largely trace the gas density, as expected from the narrow density-temperature relation shown in Fig. 5. In contrast, the temperature in the patchy simulation differs by orders of magnitude between ionized and neutral regions. Some of the highest temperatures (outside shock-heated regions) are found near the ionization fronts in recently ionized gas which had little time to subsequently cool. After reionization (redshift 4.8, lower panels), the temperature in the homogeneous simulation still looks qualitatively similar (except for more pronounced shock heating than at z = 7). In the patchy simulation, all of the gas has been photoheated as well, but large scale temperature fluctuations that are relics of the patchy reionization process are still clearly visible. The highest temperatures (except for shock-heated gas) are found in regions that have been reionized late.

During reionization, at z = 7, the temperature map of the patchy simulation clearly shows the locations of ionized regions which have been strongly heated, while neutral regions are still cold. The effect discussed above, that recently heated low density gas is hotter than gas that has been reionized earlier, is also visible. As a result, the temperature of ionized gas located near the ionization fronts is particularly high.
We have already seen from the temperature-density distributions that even at z = 7 all gas has been heated significantly in the homogeneous UVB run. This is reflected by the temperature map in the upper left panel of Fig. 6, which shows no unheated gas. One can see this clearly by comparing the upper and lower right panels of Fig. 6. Regions that were still cold and neutral at z = 7 are particularly hot at z = 4.8, as they have been reionized late and had less time to cool after their reionization.

3.2.2 Modulation of the Lyman-α forest on large scales

These large scale temperature fluctuations also affect the ionization state of the gas. In equilibrium, the neutral fraction is proportional to the recombination rate, which depends on temperature roughly as T^-0.7. Hence, hot regions have a lower neutral fraction and correspondingly allow more Lyman-α forest transmission. This results in a large scale modulation of the Lyman-α transmitted flux. Fig. 7 illustrates this effect. The bottom panel shows the local reionization redshift along a line-of-sight through the simulation box of the zr = 5.3 patchy run. The middle panel displays the IGM temperature along the same skewer at z = 4.8. Temperatures are shown for both the patchy run and the corresponding matched homogeneous run. The late reionizing region in the middle (x ≈ 6 to 32 cMpc/h) has an increased temperature in the patchy run, as there is little time to cool between its reionization and z = 4.8. The increased temperature in turn results in a lower neutral fraction and hence more transmission in the Lyman-α forest in that region in the patchy run compared to the matched homogeneous run. This is shown in the upper panel of Fig. 7. In early reionizing regions, the opposite effect can be seen: the transmission is slightly lower in the patchy reionization simulation. Note that the optical depths in both simulations have been rescaled such that (when averaged over our full sample of 5000 lines-of-sight through the box) they are consistent with the observed mean transmission value (according to Eq. 1).
Such a modulation is also expected to change the Lyman-α forest transmitted flux power spectrum. Fig. 8 shows that this is indeed the case. Results are indicated for several redshifts after the end of reionization. Clearly visible is an increased power in the patchy simulation on large scales, k ≈ 10 −3 to a few times 10 −3 s km −1 , corresponding to modes with peaks having sizes of λ/2 5 cMpc/h, which are typical sizes of ionized regions (compare to Fig. 2). The power is increased all the way up to the fundamental mode of the 40 cMpc/h box. As expected for an effect caused by large scale temper-ature fluctuations seeded by patchy reionization, the power enhancement fades away at lower redshift. On small spatial scales, k 0.05 s km −1 , there is less power in the patchy simulation compared to the matched homogeneous simulation. This behaviour on large and small spatial scales is consistent with that obtained by Wu et al. (2019) using fully coupled radiation-hydrodynamics simulations that follow the transfer of radiation also with the M1 method. In contrast, and maybe somewhat surprisingly, Mishra & Gnedin (2022) do not find a significant upturn of the flux power spectrum on large spatial scales in their simulations of patchy reionization that use the Optically Thin Variable Eddington Tensor technique for the radiative transfer. The origin of this discrepancy is currently unclear. We find that the behaviour of the power spectrum on small spatial scales (large k) is sensitive to the pressure smoothing of the gas and hence requires a coupling of the radiative transfer to the hydrodynamics to be faithfully followed. The increase of power on large spatial scales (small k) is in contrast also captured by post-processing radiative transfer simulations (e.g., Keating et al. 2018). The details of this increase may depend on the model of the ionizing source population, which is in turn affected both by the assumed astrophysics and cosmology. We will explore this further in future studies, but Hassan et al. (2022) suggest that this may have rather mild impacts on large spatial scales. Employing several of our patchy reionization simulations, the causes of the reduction of the power spectrum on small scales were investigated in detail by Molaro et al. (2022). The main findings were that the spatial fluctuations in the thermal broadening kernel and the different peculiar velocity fields in the patchy simulations cause the reduction in small scale power. For example, transmission spikes will appear first in low density regions that typically reionize late and are hence particularly hot in the patchy model. Consequently, there will be more thermal broadening in these regions, reducing the flux power spectrum on small scales. Note, however, that the reduction of power on small scales discussed above is based on a matched comparison of a patchy and homogeneous run with the same mean ionization and thermal history. When comparing to a homogeneous run with a different and more extended thermal history, like our baseline run with a Puchwein et al. (2019) UVB, there can be more power on small scales in the patchy simulation (compare to Fig. 4). Spatially varying pressure smoothing of the IGM The photoheating of the IGM during reionization strongly increases its temperature and gas pressure. The energy injected per baryon, and hence the temperature increase, depend on the spectrum of the ionizing radiation but are approximately independent of gas density. 
This approximately density-independent heating results in a fairly flat temperature-density relation for most of the gas shortly after reionization (see Fig. 5), with the exception of regions that have been strongly gravitationally heated by shocks and compression during structure formation. The spatial variation of the gas pressure is, hence, largely dominated by the density variation, with dense regions having the highest pressure. Many of the smaller/lower density structures for which the photoheating dominates over the gravitational heating, e.g., filaments and walls, will consequently be over-pressurized and will start to expand after their reionization. Post-processed radiative transfer simulations do not capture this effect, as the coupling to the hydrodynamics is missing. With our hybrid scheme, we can instead study the pressure smoothing of the IGM during and after patchy reionization. The expansion of photoheated structures is illustrated in Fig. 9, which shows the gas pressure and gas velocity field of a region in our zr = 5.3 patchy simulation before and after its reionization.

Figure 9. Gas pressure in thin slices through the zr = 5.3 patchy simulation. Results are shown at z ≈ 11.6 (left panel), when the region is still largely neutral (only the upper left corner has already been swept over by the approaching ionization front), and at z ≈ 8.6 (right panel), after the region has been reionized. Both panels show the same structures, although note that different x-coordinate ranges have been used because the whole region is falling (almost exactly) towards the left. The arrows indicate the gas velocity in a reference frame in which the central filament is roughly at rest. The photoheating during reionization strongly boosts the gas pressure. This also increases the absolute difference in pressure between dense structures and voids. After reionization these structures are over-pressurized and start to expand, which is visible in the velocity fields. The gas velocities near the edges of the filament point inward before reionization, but outward after reionization. The arrows are scaled such that a velocity of 10 km/s corresponds to a length of 20 ckpc/h.

These hydrodynamic reactions to the photoheating smooth the gas density distribution on small scales, roughly below the filtering scale (see Gnedin & Hui 1998), resulting in differences that persist well after reionization. Fig. 10 compares the gas density in a thin slice through our zr = 5.3 patchy simulation to that in the corresponding matched homogeneous simulation. This allows an assessment of how patchy reionization causes spatial fluctuations in the amount of small scale structure present in the gas density field. The contours in the upper right panel, which shows the patchy run at z = 7, indicate ionization fronts, i.e. the edges of ionized bubbles. Careful inspection shows that regions near the center of ionized bubbles (such as region A), which reionize early in the patchy simulation, are more strongly smoothed than in the matched homogeneous run, where all regions largely follow the mean reionization history. Regions outside ionized bubbles (such as region B) have instead not experienced any pressure smoothing yet in the patchy run. The bottom panels of Fig. 10 show the gas density in the same slice (in comoving coordinates) after reionization, at z = 4.8.
The dotted contours in the bottom right panel indicate the same regions as in the upper right panel, hence separating regions that have been reionized early (before z = 7) from regions that have been reionized late (after z = 7). Even at z = 4.8, regions that have been reionized early have a smoother gas distribution than regions that have been reionized late.

Figure 10. Compared to the homogeneous simulation, the gas in the central regions of ionized bubbles has experienced more pressure smoothing in the patchy simulation. In contrast, regions that have not been ionized yet in the patchy simulation show more pronounced small scale structure than in the homogeneous run. This is illustrated in more detail for regions A and B, for which zoom-ins are shown in Fig. 11. The bottom panels show the gas density in the same slices after reionization at redshift 4.8. The contours in the lower right panel indicate the same regions as in the upper right panel, hence separating regions that have reionized early (before redshift 7) from those that have reionized late. As illustrated by regions A and B, the difference in local reionization redshift results in notably different pressure smoothing even after reionization has ended (also see Fig. 11 for zoom-ins).

To illustrate these effects more clearly, we zoom in on regions A and B in Fig. 11. For reference, we also show results for a non-radiative simulation of the same volume. This run does not include any photoionization or photoheating, so no pressure smoothing is present outside shock-heated regions. This provides a reference model for comparison in which pressure smoothing is absent in the IGM. Clearly, the early reionizing region A is most strongly smoothed in the patchy run at both redshifts. At z = 7, the density field in the homogeneous run is still very similar to the non-radiative simulation, while pressure smoothing is clearly visible at z = 4.8. In the patchy run at z = 4.8, shells (visible as ring-like features) around photo-evaporated structures are visible. These will be discussed in more detail in Sec. 3.2.4. As expected, the neutral region B in the patchy simulation is indistinguishable from the non-radiative run at z = 7. However, the slice through the partly ionized homogeneous run also still looks very similar. At z = 4.8, region B is smoother in the homogeneous run compared to the patchy run. In the latter, the region reionizes late and hence has little time to respond to the heating.

In the following, we want to quantify the differences in the pressure smoothing that we have visually identified in the density fields. To this end, we perform local measurements of the power spectrum of the gas density contrast, δ = Δ − 1 = ρ/ρ̄ − 1, where ρ is the gas density and ρ̄ the mean baryon density. For these measurements, we use a 2048^3 grid covering the whole simulation box, and then randomly select 32768 regions of size 64^3 grid cells, corresponding to a region side length of 1.25 cMpc/h. The number of regions was chosen to sample a volume comparable to the full box. In each region, we then measure the gas density contrast power spectrum. We use a window function to reduce the impact of the non-periodic boundary conditions of the individual regions (see Appendix B for full details). We also compute the mean reionization redshift of each region, so that we can bin the power spectrum measurements by local reionization redshift.
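The kind of local, windowed power spectrum measurement described here can be sketched as follows. This is illustrative only: the actual estimator and window function are specified in Appendix B of the paper, and the Hann window and normalization used below are assumptions.

```python
import numpy as np

def region_power_spectrum(delta_region, cell_size):
    """Isotropically averaged power spectrum of the density contrast in a
    small cubic region (shape (n, n, n)), using a Hann window to suppress
    edge effects from the non-periodic boundaries.

    cell_size : comoving cell size, e.g. in cMpc/h
    Returns (k, P(k)), with k in h/cMpc for cell_size in cMpc/h.
    """
    n = delta_region.shape[0]
    w1d = np.hanning(n)
    window = w1d[:, None, None] * w1d[None, :, None] * w1d[None, None, :]
    norm = np.mean(window ** 2)                # compensate power lost to the window

    field = np.fft.fftn(delta_region * window)
    power3d = np.abs(field) ** 2 / (n ** 3 * norm)

    # Spherically average in |k| bins.
    kfreq = 2.0 * np.pi * np.fft.fftfreq(n, d=cell_size)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2).ravel()
    bins = np.linspace(kmag[kmag > 0].min(), kmag.max(), 20)
    which = np.digitize(kmag, bins)
    pk = np.array([power3d.ravel()[which == i].mean() for i in range(1, len(bins))])
    kc = 0.5 * (bins[:-1] + bins[1:])
    return kc, pk * cell_size ** 3              # one possible normalization

# Example: a 64^3 region of Gaussian noise (purely illustrative).
rng = np.random.default_rng(3)
region = rng.normal(size=(64, 64, 64))
k, pk = region_power_spectrum(region, cell_size=1.25 / 64)
print(k[:3], pk[:3])
```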
Figure 11. Gas density in units of the mean baryon density in thin slices covering regions that reionize early (region A, top set of panels) and late (region B, bottom set of panels) in our patchy reionization model. Density fields are shown at redshifts 7.0 and 4.8 for the zr = 5.3 patchy simulation (right panels), the matched homogeneous simulation (middle panels), and a non-radiative ("adiabatic") simulation without photoheating and radiative cooling (left panels). The latter is included to provide a reference model without pressure smoothing (due to photoheating). The color scale is the same as in Fig. 10. The locations of regions A and B are also indicated there. Clearly early/late reionizing regions exhibit more/less pressure smoothing in the patchy simulation compared to the matched homogeneous model.

Figure 12. Impact of patchy reionization on small scale structure in the low-density IGM. Shown are ratios of the gas density (δ) power spectra in the zr = 5.3 patchy and non-radiative ("adiabatic") simulations. Results are shown at z = 4.8 for regions with different mean reionization redshifts (z_b in the figure legend denotes the center of the ∆z = 0.5 bins into which the regions are sorted by their mean local reionization redshift). Regions that have been reionized earlier show a larger suppression of small scale power. All regions included here have a mean gas density of 0.2 < ∆ < 0.4 (in units of the mean cosmic baryon density), corresponding to low-density IGM that the Lyman-α forest is sensitive to. The dotted lines are fits to the solid curves assuming the functional form given in Eq. (2).

We perform this procedure both for the non-radiative simulation, as well as for our zr = 5.3 patchy run. We then compute for each reionization redshift bin the ratio of the mean power spectrum (averaged over all regions in the bin) in the patchy run to that (of the same regions) in the non-radiative ("adiabatic") simulation. This quantifies the reduction of power caused by photoheating. Fig. 12 shows this quantity at z = 4.8. Typically the gas density power spectrum is dominated by dense collapsed structures (e.g., Kulkarni et al. 2015). As we are primarily interested in the low density IGM that is probed by the Lyman-α forest at these redshifts, we opted to include only low-density regions with a mean density 0.2 < ∆ < 0.4 in the computation of the mean power spectra. This is also a density range in which many of the shell/ring features discussed above (and further in Sec. 3.2.4) reside and to which the Lyman-α forest is sensitive at very high redshifts. A reduction of gas density power compared to the non-radiative simulation is clearly present on small spatial scales (large k). We also find a clear dependence of the amount of suppression on the local reionization redshift of the considered regions. As expected, early reionizing regions show a suppression of power up to larger spatial scales (smaller k), while the regions that reionize latest (the zr = 5.5 bin) have the smallest reduction of small scale structure in the gas density field. Overall the suppression of the power spectrum as a function of k can be well described with a functional form similar to that used in Gnedin & Hui (1998), i.e. with a suppression factor P_patchy(k)/P_ad(k) = N exp(−k²/k_PS²) (Eq. 2), where P_patchy and P_ad are the power spectra in the patchy and non-radiative simulation respectively, and k_PS describes a pressure-smoothing scale. We will fit this function to the curves in Fig. 12 to extract the corresponding pressure smoothing scales.
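To make this fitting step concrete, the short sketch below fits the suppression factor of Eq. (2) to a power-spectrum ratio with scipy; the arrays and numbers are placeholders standing in for the binned simulation measurement, not output of the actual Sherwood-Relics pipeline.

```python
# Illustrative sketch (not the actual analysis code): fit the pressure-smoothing
# suppression factor P_patchy/P_ad = N * exp(-k^2 / k_PS^2) to a measured ratio.
import numpy as np
from scipy.optimize import curve_fit

def suppression(k, n_norm, k_ps):
    """Suppression factor of Eq. (2); k and k_ps in h/cMpc."""
    return n_norm * np.exp(-(k / k_ps) ** 2)

# Placeholder data standing in for the binned P_patchy(k)/P_ad(k) measurement.
k = np.logspace(0.5, 2.3, 30)                       # h/cMpc
ratio = 1.02 * np.exp(-(k / 60.0) ** 2)             # synthetic example
ratio += np.random.default_rng(0).normal(0, 0.01, k.size)

# Fit only where the ratio exceeds 0.1, as done in the text, to avoid the
# oscillatory tail at high k.
mask = ratio > 0.1
popt, pcov = curve_fit(suppression, k[mask], ratio[mask], p0=(1.0, 50.0))
n_norm, k_ps = popt
lambda_ps = 1.0 / k_ps                              # pressure-smoothing length, cMpc/h
print(f"N = {n_norm:.3f}, k_PS = {k_ps:.1f} h/cMpc, lambda_PS = {lambda_ps:.4f} cMpc/h")
```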
In these fits, we treat k_PS as a free parameter. We will compare the scale measured from the simulation in this way to the Gnedin & Hui (1998) filtering scale (Eq. 4) later in this section. To allow a bit more flexibility in the fits we have included a normalization factor N which helps to absorb some of the effects that are caused by radiative cooling and star formation in dense objects in the patchy run. We expect this factor to be close to unity and indeed we find numerical values in the range 1.006 to 1.075 for the different reionization redshift bins. We perform the fits only in the range where P_patchy/P_ad > 0.1 to avoid being affected by the oscillations that are present at lower values (higher k) in the case of high reionization redshifts. We interpret these oscillations as an effect similar to that found in Gnedin & Hui (1998) for linear perturbations (see their fig. 1). We also note that while the functional form given by Eq. (2) works well for the suppression of power in low-density regions (∆ ∼ 0.3), it does so less well for regions at higher mean density (∆ ∼ 1 and larger), likely due to a larger number of non-linearly collapsed structures there. The measured pressure smoothing scale k_PS can then be converted to a length scale; we do this by defining the pressure smoothing length scale by λ_PS ≡ 1/k_PS. We do not include a 2π factor in this definition to stay consistent with Kulkarni et al. (2015). The λ_PS values inferred from the fits are shown by the red curve in Fig. 13. On the x-axis, we show the time since reionization, i.e. the time between the mean reionization redshift of the regions in a reionization redshift bin and the redshift, z = 4.8, at which the power spectrum suppression is measured. As expected, the pressure smoothing scale increases with time since reionization as there is more time for the expansion of structures after their photoheating.

Figure 13. Pressure smoothing scale as a function of time since reionization. The values for the patchy simulation were obtained from the parameters of the fits shown in Fig. 12. For comparison we show the Jeans scale either evaluated at the actual density or at mean density (∆ = 1). Note that the values of the Jeans scales were divided by 3 to better fit on the figure. Furthermore, we indicate the filtering scale computed as in Gnedin & Hui (1998), as well as the length scale that corresponds to a free expansion with a starting velocity of 10 km/s (see main text for details).

We next compare the measured pressure smoothing scale to different theoretical estimates such as the Jeans scale, the Gnedin & Hui (1998) filtering scale, and the distance corresponding to a simple free expansion with a fixed starting velocity. The Jeans scale is an instantaneous measure which corresponds to the scale at which the sound crossing time matches the free fall time. Structures below this scale are typically assumed to be suppressed. Here, we define the co-moving Jeans scale similar to equation (2) of Gnedin & Hui (1998), but with an additional factor ∆ to allow evaluation at different densities in units of the mean density, i.e. k_J = (a/c_s) √(4πG ρ̄_m ∆). Here c_s = √(5kT/(3m̄)) is the sound speed, with T being the median temperature of a region, k the Boltzmann constant and m̄ the mean particle mass; ρ̄_m is the mean physical matter density at the considered redshift, and a is the scale factor. In linear perturbation theory, this should be evaluated at ∆ = 1, but has a clear meaning only if the temperature evolves like T ∝ a⁻¹ (see Gnedin & Hui 1998).
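As a rough numerical illustration of this definition of the Jeans scale, the sketch below evaluates the comoving Jeans wavenumber and the corresponding length λ_J = 1/k_J for an underdense, photoheated IGM patch; the temperature, density and cosmological parameters are illustrative assumptions only.

```python
# Illustrative sketch: comoving Jeans scale k_J = (a/c_s) * sqrt(4*pi*G*rho_m*Delta)
# for an underdense, photoheated IGM patch. All input values are example choices.
import numpy as np

G   = 6.674e-11        # m^3 kg^-1 s^-2
k_B = 1.381e-23        # J/K
m_p = 1.673e-27        # kg
Mpc = 3.086e22         # m

def jeans_scale(z, T, Delta, Omega_m=0.308, h=0.678, mu=0.6):
    """Comoving Jeans wavenumber [h/cMpc] and length [cMpc/h] (assumed parameter values)."""
    a = 1.0 / (1.0 + z)
    H0 = 100.0 * h * 1e3 / Mpc                        # s^-1
    rho_crit0 = 3.0 * H0**2 / (8.0 * np.pi * G)       # kg/m^3
    rho_m = Omega_m * rho_crit0 / a**3                # physical mean matter density
    c_s = np.sqrt(5.0 * k_B * T / (3.0 * mu * m_p))   # sound speed, m/s
    k_J = (a / c_s) * np.sqrt(4.0 * np.pi * G * rho_m * Delta)   # comoving, 1/m
    k_J_hMpc = k_J * Mpc / h                          # h/cMpc
    return k_J_hMpc, 1.0 / k_J_hMpc

kJ, lamJ = jeans_scale(z=4.8, T=1.0e4, Delta=0.3)
print(f"k_J ~ {kJ:.1f} h/cMpc, lambda_J ~ {lamJ:.3f} cMpc/h")
```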
For different values of ∆, this would correspond to matching sound crossing and free fall time at that density, but only in the absence of an expanding background. Here we are interested in low density regions, ∆ ∼ 0.3, which often will expand even in the absence of a thermal pressure. It is thus not entirely clear how well motivated evaluating this at ∆ ∼ 0.3 is. Nevertheless we show results for both, i.e. ∆ set to the mean density of a region and ∆ set to 1. We then average the Jeans length scale, λ_J ≡ k_J⁻¹, over all regions falling in a reionization redshift bin. Also, note that we have divided the Jeans length scales by a factor of 3 in Fig. 13, so that the curves fit better onto the plot. Independent of the choice of ∆, we find that the Jeans scale does not adequately reproduce the pressure smoothing scale measured from the simulation. This is not too surprising as an instantaneous measure cannot faithfully capture the time evolution of the pressure smoothing following a heating event. As we are measuring the pressure smoothing shortly after the end of reionization, when there was only limited time for the IGM to hydrodynamically react to the heating, we find that the Jeans scale is much larger than the measured pressure smoothing scale (keep in mind the division by 3). It also decreases, rather than increases, with the time since reionization as recently reionized regions are hot and have a correspondingly large Jeans scale. Next, we compute the Gnedin & Hui (1998) filtering scale, which aims to capture the time evolution properly by taking the thermal history into account. The filtering scale k_F is given by their equation (6) as a weighted time integral of the Jeans scale over the thermal history, where D_+(t) is the growth function of linear perturbations. As we are considering high redshifts here, z ≥ 4.8, we simplify this by using D_+ ∝ a and approximating the Hubble function by H ≈ H0 √(Ωm a⁻³), where H0 and Ωm are the usual ΛCDM cosmological parameters. Using this, switching to a′ = a(t′) as the integration variable and carrying out the inner integral, we can write the pressure smoothing scale as a single integral of k_J⁻²(a′) over the scale factor (also see Gnedin 2000). We evaluate this for each region using the history of the median gas temperature, convert this to a filtering length scale λ_F ≡ k_F⁻¹, and then average over all regions within the considered reionization redshift bin. The results of this are shown by the solid gray line in Fig. 13, which is overall in good agreement with the pressure smoothing scale measured from the simulation. This confirms that the Gnedin & Hui (1998) filtering scale describes the pressure smoothing of the low-density IGM well after reionization, and captures its time evolution. Finally we compare the measured pressure smoothing scale to a simple expansion model in which structures expand freely after their photoionization/heating. This length scale in comoving units is given by λ_exp = ∫ v_start (a_r/a′) dt′/a′, where v_start is the initial velocity right after reionization at a_r and the a_r/a term takes care of the cosmological decay of peculiar velocities. This expansion scale is shown for v_start = 10 km/s in Fig. 13. Maybe somewhat surprisingly this simplistic model is in quite good agreement with the simulation.
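To make the free-expansion estimate concrete, the following sketch integrates λ_exp = ∫ v_start (a_r/a′) dt′/a′ numerically, using the same matter-dominated approximation H ≈ H0 √(Ωm a⁻³) quoted above; the redshifts and cosmological parameters are illustrative assumptions.

```python
# Illustrative sketch: comoving free-expansion length
#   lambda_exp = integral over t of v_start * (a_r/a) / a dt,
# evaluated with dt = da / (a * H(a)) and H ~ H0*sqrt(Omega_m*a^-3).
import numpy as np
from scipy.integrate import quad

Mpc = 3.086e22   # m
km  = 1.0e3      # m

def lambda_exp(z_reion, z_obs, v_start_kms=10.0, H0_kms_Mpc=67.8, Omega_m=0.308, h=0.678):
    """Comoving expansion length in cMpc/h (matter-dominated approximation, example parameters)."""
    a_r, a_obs = 1.0 / (1.0 + z_reion), 1.0 / (1.0 + z_obs)
    H0 = H0_kms_Mpc * km / Mpc                       # s^-1

    def integrand(a):
        H = H0 * np.sqrt(Omega_m * a**-3)            # high-z approximation
        v_pec = v_start_kms * km * (a_r / a)         # decaying peculiar velocity, m/s
        return v_pec / (a * a * H)                   # d(lambda_comoving)/da

    length_m, _ = quad(integrand, a_r, a_obs)
    return length_m / Mpc * h                        # cMpc/h

print(f"lambda_exp ~ {lambda_exp(z_reion=7.0, z_obs=4.8):.3f} cMpc/h")
```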
Note that the corresponding smoothing kernel for the gas density field, ∝ exp(−k²λ_exp²), corresponds to a Gaussian with standard deviation σ = √2 λ_exp in position space. Thus, the standard deviation of the real space smoothing kernel roughly grows like the travel distance for a √2 × 10 km/s ≈ 14 km/s starting velocity, which is close to the speed of sound in a ∼10⁴ K ionized IGM. In particular, this simple free expansion model seems to reproduce the measured smoothing well, and even better than the filtering scale shortly after reionization (≲ 0.5 Gyr). This suggests that the expansion of photoheated structures may not be strongly hindered by swept up material in the surrounding lower density regions in this time span. The lower level of agreement of the filtering scale may, however, be (partly) related to how the filtering scale is computed here. We use the whole thermal history of a region to compute the filtering scale. Part of a region will already be ionized and heated before the mean reionization redshift of the region is reached. Hence, despite using the median temperature, a region may already have a non-zero filtering scale at its mean reionization redshift, i.e. at a time of zero in Fig. 13. Such an offset could then still have a notable effect at somewhat later times. Furthermore, our measurement of the pressure smoothing in the simulation is based on a comparison of the patchy to the non-radiative run, thereby neglecting the small amount of pressure smoothing that is present in the latter. Also, we do not get the median temperature history directly from 3D grids but by using all pixels of a set of 5000 lines-of-sight through the simulation box that fall into a region. This gives us better time resolution (∆z = 0.1) as the line-of-sight files were saved more frequently than the full snapshots. It introduces, however, some noise which may contribute to such an offset. We have checked that using the line-of-sight file temperatures gives essentially the same result for the Jeans scale as the full temperature field, suggesting that the impact of this procedure on the filtering scale should also be small. Finally, we have neglected that the baryon density perturbation starts out at a much smaller value compared to the dark matter perturbation at the time of the decoupling of the CMB. This should, however, be a very small effect at the low redshifts considered here (Long et al. 2022). Overall both the Gnedin & Hui (1998) filtering scale and the simple expansion model match the measured pressure smoothing in the underdense IGM well, while the Jeans scale is clearly inadequate shortly after reionization.

3.2.4 Lyman-α lines due to pressure-smoothed structures

In Fig. 11, we have seen how pressure smoothing puffs up photoheated structures. This can result in shell-like features (visible as "rings" in thin 2D slices). Such features are also present in various other SPH and grid-based hydrodynamic simulations (see, e.g., fig. 9 in Kulkarni et al. 2015, the highest resolution panel in fig. 8 of Lukić et al. 2015, figs. 1 and 2 in D'Aloisio et al. 2020, fig. 4 in Park et al. 2021, or fig. 6 in Nasir et al. 2021), but have received only limited attention so far. Here, we investigate to what extent such shells and puffed up gas clouds leave a noticeable imprint in the Lyman-α forest. The top panel of Fig. 14 displays the gas density in a thin slice through part of our zr = 5.3 patchy simulation at z = 4.2. Various shell-like features, often visible as rings in this 2D slice, are present.
To investigate their impact on Lyman-α absorption, we shoot several lines-of-sight through the slice (dotted lines, labelled LOS A to E), and calculate synthetic Lyman-α forest spectra for them. For each line of sight, we show a twin panel in Fig. 14, with the lower part showing the normalized transmitted Lyman-α flux and the upper part showing the gas density in units of the mean baryon density. We have rescaled the optical depths by a constant factor to make the mean transmitted flux consistent with observed values (using Eq. 1 and computing the rescaling factor based on our full sample of 5000 lines-of-sight). The small dotted lines in the twin panels connect real-space positions of features in the density field to the corresponding redshift space position in the mock spectra. This facilitates identifying the associated absorption features. LOS A passes through a "ring" in the density slice at x ≈ 16.3 cMpc/h. A corresponding double peak is clearly visible in the density profile along the line-of-sight at that location (upper panel of the uppermost twin panel). The two peaks are marked by dotted lines. In redshift space, they correspond to a double-dip absorption feature, which in this case directly reflects the shell traversed by the line-of-sight. Interestingly, the peak/dip separation is larger in redshift than in real space. This indicates that the shell is, as expected, expanding. LOS B passes through a "ring" at x ≈ 18.3 cMpc/h. Again, the double peak in the corresponding line-of-sight density profile is easily identified. In this case, the absorption features of the two peaks are more strongly overlapping, resulting in a broad flat bottom absorption profile in the mock spectrum. LOS C passes through a "ring" at x ≈ 17.8 cMpc/h. While in the previous cases the double peak in the density profile was rather isolated, it falls near other structures here. This results in two absorption dips that fall on a larger scale gradient in the transmitted flux fraction. In an actual observation, this would likely make it more difficult to infer that these two dips originate from a single shell. LOS D passes a less pronounced "ring" at x ≈ 15.7 cMpc/h, and with several other shells and a larger structure nearby. Two corresponding small peaks can still be identified in the line-of-sight density profile, but the corresponding absorption features fall near a larger saturated absorber and are only visible as a slight change in the curvature of the spectrum. This would likely go unnoticed even in a high signal-to-noise observation. LOS E passes through two "rings" at x ≈ 16.1 and 16.4 cMpc/h. Four corresponding peaks are visible in the line-of-sight density profile. Three of these cause visible dips in the synthetic spectrum. In an observation, it would, however, be difficult to identify which dips originate from the same shell, making inferring any pressure smoothing hard. Overall, we find that shells caused by photoheated expanding structures often leave a direct imprint in the Lyman-α forest, with distinct absorption dips at the shell "walls". In the most distinct case of a single, isolated, roughly spherical/cylindrical shell an intriguing double-dip absorption profile is imprinted on the Lyman-α forest spectrum (see LOS A). Measuring, e.g., the separation of the two dips should in principle allow inferring the amount of pressure smoothing that the object has experienced.
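A useful shorthand for that last point: under pure Hubble flow a velocity split Δv maps to a comoving separation Δv (1 + z)/H(z), and any peculiar expansion velocity of the shell inflates the observed split beyond this. The sketch below implements the conversion with illustrative numbers that are not tied to the actual LOS A measurement.

```python
# Illustrative sketch: convert the velocity separation of a double-dip absorption
# feature into a comoving shell size, assuming pure Hubble flow. Any peculiar
# expansion velocity of the shell adds to the observed separation, so this is an
# upper limit on the real-space size. Example numbers only.
import numpy as np

def hubble(z, H0_kms_Mpc=67.8, Omega_m=0.308):
    """H(z) in km/s/Mpc, flat matter + Lambda cosmology (assumed parameters)."""
    return H0_kms_Mpc * np.sqrt(Omega_m * (1.0 + z) ** 3 + (1.0 - Omega_m))

def shell_size_from_dv(delta_v_kms, z, h=0.678):
    """Comoving separation (cMpc/h) corresponding to a velocity split delta_v at redshift z."""
    d_proper_Mpc = delta_v_kms / hubble(z)           # proper Mpc from Hubble flow
    return d_proper_Mpc * (1.0 + z) * h              # comoving Mpc/h

# e.g. a ~40 km/s split between two absorption dips at z = 4.2 (hypothetical value)
print(f"{shell_size_from_dv(40.0, 4.2):.3f} cMpc/h")
```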
In practice, it may be difficult to identify which absorption features correspond to a shell originating from the same structure (see, e.g., LOS D and E), as well as what the exact orientation of the line-of-sight with respect to the shell is. An analysis to infer the pressure smoothing based on such features would hence likely require a larger sample of absorption systems along with a tailored statistical technique tested on simulations. It would also be important to check how well different hydrodynamics schemes agree on the prominence and properties of such shells. In addition, their abundance may depend on the amount of preheating of the neutral IGM by X-rays and the relative streaming velocity between baryons and dark matter (see, e.g., fig. 4 of Park et al. 2021). Finally, observations would need to be done at a suitable redshift at which the Lyman-α forest is sensitive to the typical densities of such shells. The redshift considered here, z = 4.2, seems to work reasonably well for this.

Figure 14. Expansion of gas overdensities due to the heating provided by reionization and their imprint on Lyman-α forest absorption lines. The top panel displays the gas density in a thin slice in our zr = 5.3 patchy simulation at z = 4.2. Also indicated are five lines-of-sight (LOS A-E) for which we explore the Lyman-α transmission. The five other sets of panels show the density (in units of the mean) and the normalized transmitted flux for these five lines-of-sight. In the top panel various "rings" are visible. These appear when the slice cuts through expanding spherical or cylindrical shells that are produced when overdensities are evaporated by the photoheating provided by reionization. In the density skewers, these "rings" are visible as pairs of density peaks. Some of them are marked by gray dotted lines, which also connect to the corresponding redshift-space positions in the Lyman-α spectra, i.e. the transmitted flux panels. There, isolated "rings" appear as absorbers with two minima (see, e.g., LOS A) or with a flat bottom (LOS B). "Rings" in regions with various other features in the density field can, e.g., appear as small dips in the larger scale features of the transmitted flux (e.g., LOS C-E). Such imprints can in principle be used to constrain pressure smoothing and reionization.

SUMMARY AND CONCLUSIONS

We have presented the Sherwood-Relics simulations, a new suite of cosmological hydrodynamical simulations aimed at modelling the IGM during and after cosmic reionization. The main difference to our previous Sherwood simulation project, which we build on in this work, is an improved treatment of the ionizing UV radiation field and of the thermochemistry of the IGM. Our new simulation sample consists of over 200 runs covering cubic volumes with sidelengths ranging from 5 to 160 cMpc/h. These are populated with between 2 × 512³ and 2 × 2048³ particles. Most of the simulations use an updated time-dependent but spatially homogeneous UV background model along with a non-equilibrium thermo-chemistry solver. These runs cover a wide range in thermal evolutions, cosmological parameters, dark matter free streaming scales and reionization histories, and will be instrumental for deriving constraints on these properties from Lyman-α forest observations.
The main focus of the analysis presented in this work is, however, the impact of a more realistic patchy cosmic reionization process on the properties of the IGM during the epoch of reionization, as well as its relic signatures that persist for a considerable amount of time in the post-reionization IGM, such as spatial fluctuations in the IGM temperature, ionization state and small scale structure. To this end, we have developed a new hybrid radiative transfer/cosmological hydrodynamical simulation technique that allows following an inhomogeneous cosmic reionization process as well as the associated heating and pressure smoothing. The scheme uses radiation fields from post-processing radiative transfer simulations to photoionize and photoheat the IGM in a subsequent cosmological hydrodynamical simulation that then also captures the hydrodynamic response to the heating. This approach is suitable for the IGM, computationally relatively cheap and circumvents the challenges of a full hydrodynamical modelling of the source galaxy population. We assess the impact of the patchiness of reionization by comparing such "patchy" runs to homogeneous UVB simulations with the same mean reionization and thermal history. Our main findings are:

• Consistent with previous work, patchy reionization seeds IGM temperature fluctuations on large scales that persist well into the post-reionization epoch, down to z ≈ 4.

• These temperature fluctuations are closely related to the local reionization redshift, with late reionizing regions being hotter.

• The ionization state of the IGM reflects these temperature fluctuations. This causes a modulation of the Lyman-α forest transmitted flux on large scales and a corresponding increase in the (one-dimensional) Lyman-α forest power spectrum at k ≲ 10⁻² s/km.

• Patchy reionization also leads to a spatially varying pressure smoothing of the IGM. This results in spatial fluctuations in the amount of small scale density structure that is present in the IGM, with early reionizing regions exhibiting the least amount of such structures.

• Following reionization, the pressure smoothing length scale as a function of time since reionization is well described by the Gnedin & Hui (1998) filtering scale in the very low-density IGM. A simplistic free expansion model with an appropriate starting velocity also provides a reasonable fit. The instantaneous Jeans scale is instead not suitable for quantifying pressure smoothing shortly after reionization.

• Pressure smoothing puffs up or evaporates small IGM structures such as filaments and small halos. This often results in shell-like features in the IGM density field and can leave characteristic imprints in the Lyman-α forest, such as flat-bottom or double-dip absorption profiles.

These various impacts of the patchiness of cosmic reionization on the IGM and the high-redshift Lyman-α forest should be taken into account when interpreting precision studies based on Lyman-α forest data, in particular when using measurements at z ≳ 4. A relatively simple way of doing this is to extract correction factors from patchy reionization simulations that can then be applied to (grids of) conventional homogeneous UVB simulations (see, e.g., Molaro et al. 2022). Alternatively, given the rather low computational cost of our hybrid patchy reionization simulation technique, tailored simulations can be performed to aid the interpretation of particular datasets.
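As a minimal illustration of the correction-factor approach mentioned above, the sketch below rescales homogeneous-UVB model power spectra by the patchy-to-homogeneous ratio measured from one matched pair of simulations; the arrays and model names are placeholders, not the Molaro et al. (2022) implementation.

```python
# Illustrative sketch of the correction-factor idea: multiply homogeneous-UVB
# model predictions by the ratio measured from one matched pair of
# patchy/homogeneous simulations. Placeholder arrays only.
import numpy as np

k = np.logspace(-3, -1, 25)                       # s/km, 1D flux power spectrum bins

# Measured once from a matched pair of simulations at a given redshift:
p1d_patchy_sim = 0.1 * k ** -0.3 * (1.0 + 0.15 * np.exp(-(k / 3e-3) ** 2))
p1d_homog_sim = 0.1 * k ** -0.3
correction = p1d_patchy_sim / p1d_homog_sim       # boost on large scales (small k)

# Applied to a whole grid of conventional homogeneous-UVB models:
homogeneous_model_grid = {"model_A": 0.12 * k ** -0.28,
                          "model_B": 0.09 * k ** -0.32}
corrected_grid = {name: p1d * correction for name, p1d in homogeneous_model_grid.items()}

for name, p1d in corrected_grid.items():
    print(name, np.round(p1d[:3], 4))
```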
The signatures of patchy reionization seen in our simulations may also be interesting for more accurately constraining the cosmic reionization process, e.g., by measuring the large scale increase of the Lyman-α forest power spectrum as a function of redshift, or by analysing characteristic imprints of photoheated structures on the Lyman-α forest. The latter would likely require the development of a suitable statistical technique to quantitatively compare such features between simulations and observations. The former would rely on more accurate measurements of the Lyman-α forest power spectrum on large scales and near the epoch of reionization.

DATA AVAILABILITY

The data and analysis code used in this work are available from the authors on request. Further guidance for accessing the publicly available Sherwood-Relics simulation data can be found on the project website: https://www.nottingham.ac.uk/astronomy/sherwood-relics/

APPENDIX A: FINDING A HOMOGENEOUS UVB MODEL THAT YIELDS THE SAME REIONIZATION AND THERMAL HISTORY AS A PATCHY SIMULATION

To assess the impact of patchy reionization on various IGM properties and observables, it is useful to have a comparison run with a homogeneous UV background that reproduces the "mean" ionization and thermal history of the patchy simulation. This isolates the impact of the "patchiness" of reionization. As a first step we need to decide what kind of average of the inhomogeneously ionized and heated IGM we want to reproduce. Since we want to have comparable photoheating, we avoid high density regions in which shock heating plays a significant role. We therefore aim to match the IGM temperature at the mean cosmic baryon density in the patchy simulation. However, even gas at a fixed density can exhibit a wide range of temperatures during patchy reionization (see, e.g., Fig. 5). Obvious choices of an "average" temperature are the mean or the median of the IGM temperature at mean density. The median has the advantage that it is less affected by shock heating of a small fraction of the gas to high temperatures. Unfortunately, during reionization, the median is almost a step function, increasing from very low temperatures to ∼ 8000 K once the universe is ∼ 50 per cent ionized (see Fig. 3). Following such a sudden heating would be unreasonable in a simulation with a homogeneous UVB. To combine the advantages of both measures, we elect to follow the mean IGM temperature (at mean density) during reionization, but then switch to the median IGM temperature (at mean density) towards its end. As illustrated in Fig. 3, we switch at the time at which mean and median temperature are identical. Furthermore, we want our comparison run with a homogeneous UVB to have a similar ionized fraction as the patchy simulation. We choose the mean ionized fraction of gas at mean cosmic baryon density in the patchy simulation as our target reionization history. After measuring the target ionization and thermal history (as defined above) from the outputs of the patchy simulation (with a time resolution of ∆z = 0.1), we apply some smoothing to them with a Savitzky-Golay filter to avoid following numerical noise in the evolution. The next step is then to compute photoionization and photoheating rates that reproduce the selected target ionization and thermal histories in a homogeneous simulation. To this end, we use a one-cell code that follows the thermal and ionization evolution of a single gas cell at mean cosmic baryon density.
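A minimal sketch of this target-matching idea is given below (the actual one-cell code and its coupling to p-gadget3 are described next and are more involved): a noisy target ionized-fraction history is smoothed with a Savitzky-Golay filter and, at each step, the hydrogen photoionization rate needed to follow it is solved from a simple ionization balance. The densities, rate coefficients and histories used here are illustrative assumptions.

```python
# Illustrative sketch (not the actual one-cell code): solve, step by step, for the
# hydrogen photoionization rate Gamma_HI that makes a single mean-density cell
# follow a smoothed target ionized-fraction history.
import numpy as np
from scipy.signal import savgol_filter

alpha_B = 2.59e-13      # cm^3/s, case-B recombination at ~1e4 K (held constant here)
n_H = 2.0e-4            # cm^-3, illustrative mean hydrogen density

# Noisy target history x_HII(t), standing in for patchy-simulation outputs.
t = np.linspace(0.0, 3.0e16, 300)                            # s
rng = np.random.default_rng(1)
x_target = 1.0 / (1.0 + np.exp(-(t - 1.5e16) / 2.0e15))
x_target = np.clip(x_target + rng.normal(0, 0.01, t.size), 1e-6, 1.0 - 1e-6)
x_smooth = savgol_filter(x_target, window_length=31, polyorder=3)

gamma_HI = np.zeros(t.size - 1)
for i in range(t.size - 1):
    dt = t[i + 1] - t[i]
    x_now, x_next = x_smooth[i], x_smooth[i + 1]
    # dx/dt = Gamma * (1 - x) - alpha_B * n_e * x,  with n_e ~ x * n_H (H only)
    recomb = alpha_B * n_H * x_now * x_now
    gamma_HI[i] = max(((x_next - x_now) / dt + recomb) / max(1.0 - x_now, 1e-10), 0.0)

print(f"peak Gamma_HI ~ {gamma_HI.max():.2e} s^-1")
```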
This one-cell code is a modified version of the code described in appendix C of Puchwein et al. (2019). At each timestep, the code checks what hydrogen photoionization rate is necessary to continue following the target ionized hydrogen fraction. Similarly, it computes what photoheating rate is necessary to follow the target thermal history. Our p-gadget3 version needs, however, not only the hydrogen rates as input, but also those for He i and He ii. To get these we assume that the hydrogen and He i photoionization rates match, i.e. Γ_HeI = Γ_HI. For the corresponding photoheating rates, we assume that the He i rate is 1.3 times the H i rate (roughly consistent with the time average of this ratio in the Puchwein et al. (2019) fiducial UVB model). The He ii rates are simply adopted from the Puchwein et al. (2019) fiducial UVB model. The latter have little impact during the hydrogen reionization epoch as significant He ii reionization happens only at lower redshift. The H i, He i, and He ii photoionization and photoheating rates obtained in this way as a function of redshift are then saved to a file, which can then be loaded into our p-gadget3 version as a homogeneous UVB model. Simulations with this model then closely follow the chosen target ionization and thermal history (see Fig. 3).

APPENDIX B: LOCAL MEASUREMENTS OF THE GAS DENSITY POWER SPECTRUM

In Sec. 3.2.3, we have performed local measurements of the power spectrum of the gas density contrast. To this end, we use the density contrast on a 2048³ grid covering the full simulation volume and then select 32768 regions of size n³_reg with n_reg = 64 from that grid for the local power spectrum measurement. In contrast to the grid covering the full volume, the individual segments do not have periodic boundary conditions. To suppress the impact of this on the power spectrum measurement, we use a sine window function w(l, m, n) = sin(πl/n_reg) sin(πm/n_reg) sin(πn/n_reg) (B1), where l, m and n are the indices along the x, y and z direction of the cells in the segment that covers a region. They range from 0 to n_reg − 1 = 63. For correctly normalizing the power spectrum, we also need to compute the sum S2 = Σ_{l,m,n} w²(l, m, n) (see, e.g., Heinzel et al. 2002). We then calculate the power spectrum of a region by P(|k|) = ⟨|δ̂_w|²⟩ L³_reg / (n³_reg S2), where ⟨|δ̂_w|²⟩ is an average of |δ̂_w|² over all k-space points falling in the considered |k|-bin used for the power spectrum computation. δ̂_w is the discrete Fourier transform of δ_w = (δ − δ̄_r) × w, where δ = min(ρ_IGM/ρ̄_baryon − 1, 99) is the density contrast of the IGM normalized by the mean baryon density. We cap the density at a value of 1 + δ = ∆ ≤ 100 to reduce the impact of dense collapsed objects. δ̄_r is the average of δ over the region. L_reg = 1.25 cMpc/h is the sidelength of the region. The discrete Fourier transform is here defined without a normalization factor in the forward transform, i.e. by δ̂_w(l′, m′, n′) = Σ_{l,m,n} δ_w(l, m, n) exp[−2πi (l l′ + m m′ + n n′)/n_reg], where l′, m′ and n′ are the indices of the k-space grid.

Figure B1. Gas density contrast power spectrum either calculated from a grid covering the full simulation box (solid lines) or by averaging the power spectra measured in all 32768 randomly placed regions (dashed lines). Results are shown for both the zr = 5.3 patchy and the adiabatic simulation at z = 4.8. The averages of the local power spectrum measurements are in excellent agreement with the corresponding globally computed power spectrum.
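The sketch below shows how such a windowed local power spectrum measurement can be implemented with numpy for a single 64³ sub-region; the placeholder density field and the binning details stand in for the actual analysis code. Averaging such per-region spectra and comparing them to the full-box measurement provides the consistency check discussed next.

```python
# Illustrative sketch: windowed power spectrum of the density contrast in one
# non-periodic sub-region, following the sine-window normalization described above.
import numpy as np

n_reg = 64
L_reg = 1.25                                   # cMpc/h, region side length

# Placeholder density contrast field for one region (would come from the 2048^3 grid).
rng = np.random.default_rng(2)
delta = rng.normal(0.0, 0.3, (n_reg, n_reg, n_reg))
delta = np.minimum(delta, 99.0)                # cap at Delta <= 100

# Sine window, Eq. (B1), and its normalization sum S2.
idx = np.arange(n_reg)
w1d = np.sin(np.pi * idx / n_reg)
w = w1d[:, None, None] * w1d[None, :, None] * w1d[None, None, :]
s2 = np.sum(w ** 2)

delta_w = (delta - delta.mean()) * w
delta_w_hat = np.fft.fftn(delta_w)             # unnormalized forward transform

# Spherically averaged P(|k|), with k in h/cMpc.
k1d = 2.0 * np.pi * np.fft.fftfreq(n_reg, d=L_reg / n_reg)
kmag = np.sqrt(sum(np.meshgrid(k1d ** 2, k1d ** 2, k1d ** 2, indexing="ij")))
power = np.abs(delta_w_hat) ** 2 * L_reg ** 3 / (n_reg ** 3 * s2)

bins = np.linspace(kmag[kmag > 0].min(), kmag.max(), 20)
which = np.digitize(kmag.ravel(), bins)
pk = np.array([power.ravel()[which == i].mean() if np.any(which == i) else np.nan
               for i in range(1, bins.size)])
print(np.round(pk[:5], 5))
```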
To test this procedure, we compute the average power spectrum of all 32768 randomly placed regions, i.e. without any cuts on mean density or local reionization redshift, and compare the results of this to the power spectrum calculated from the full grid covering the whole simulation volume. Fig. B1 displays this comparison for the zr = 5.3 patchy and the non-radiative/adiabatic simulation. We find good agreement on all overlapping k scales. Small differences are visible for the smallest k values (largest spatial scales) probed by the grids covering individual regions. This is expected as k-space is poorly sampled by only a handful of modes in the local power spectrum measurements there. On smaller spatial scales, where the pressure smoothing kicks in, the agreement is excellent. This confirms that the local power spectrum measurement works reliably.
2022-07-28T01:15:45.744Z
2022-07-26T00:00:00.000
{ "year": 2022, "sha1": "3aa599433663e6cb13c6388f751b2659b548d63f", "oa_license": "CCBY", "oa_url": "https://nottingham-repository.worktribe.com/preview/15938210/stac3761.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "204d78b296adf3af3ddb3d418e9d2e8e3118593d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
250109284
pes2o/s2orc
v3-fos-license
Virological Treatment Monitoring for Chronic Hepatitis B

More than 250 million people worldwide are currently infected with hepatitis B, despite the effectiveness of vaccination and other preventive measures. In terms of treatment, new therapeutic approaches are rapidly developing, promising to achieve the elimination of infected cells and the complete cure of infection. The on-treatment monitoring of these innovative antiviral treatments will require the implementation of new virological tools. Therefore, new biomarkers are being evaluated besides the traditional virological and serological assays in order to obtain information on different steps of the viral replication cycle and to monitor response to therapy more accurately. The purpose of this work is to describe both standard and innovative tools for chronic hepatitis B treatment monitoring, and to analyse their potential and feasibility.

Introduction

Hepatitis B virus (HBV) continues to represent a major global health issue, despite a number of effective measures of control [1]. HBV epidemiology has been dramatically changed by the availability of vaccine prophylaxis, the continued efforts to improve treatment, and the growing awareness of disease [2]. WHO has planned a number of interventions aiming to achieve viral hepatitis eradication by 2030 (2014; http://apps.who.int/gb/ebwha/pdf_files/WHA67/A67_R6-en.pdf, accessed on 13 April 2022). Since chronic HBV still affects more than 250 million people worldwide according to recent estimates, however, this goal seems ambitious [1]. Therapies currently available aim to prevent disease progression, liver cirrhosis, end-stage liver disease and hepatocellular carcinoma (HCC) development; however, the matter of the "cure" of chronic HBV infection is a more complex concept [3]. The eradication of the virus in particular remains a challenge because of its peculiar features. In fact, the viral life cycle of HBV is orchestrated by a complex replication apparatus, involving the formation of particularly stable episomal minichromosomes and covalently closed circular DNA (cccDNA) molecules. cccDNA serves as template for transcription and reservoir for replication cycles [4,5]. Furthermore, the viral genome is able to integrate into the host genome, making the infection susceptible to reactivation in certain conditions, even after years of serological and virological suppression. Therefore, while the goal of completely eradicating the virus seems, at the moment, difficult to achieve, the so-called functional cure, represented by the loss of HBsAg (hepatitis B surface antigen) positivity with anti-HBs (hepatitis B surface antigen antibody) development, is a more realistic end-point and an optimal surrogate. The end-points adequately met by the current therapeutic approaches are long-term suppression of circulating viremia and HBeAg/anti-HBe (hepatitis B envelope antigen/antibody) seroconversion for HBeAg positive patients. Several studies have shown that suppression of HBV-DNA results in biochemical remission and histological improvement, thus decreasing the risk of developing both cirrhosis and HCC [3,6-8]. Current approved treatments of chronic HBV can be broadly classified into immunomodulatory agents (standard and pegylated interferon-α, PegIFN-α) and antiviral agents (nucleoside and nucleotide analogues, NA) [3]. Immunomodulatory agents are administered in a finite course and can lead to the functional "cure", meaning HBsAg loss.
However, this is achieved in only a small percentage of patients, not exceeding 10%, regardless of HBeAg status [9,10]. Moreover, this treatment choice is limited by low tolerability and high risks of adverse events, with a relatively low acceptance of IFN by physicians and patients as a consequence [11]. Nucleoside and nucleotide analogues (NA) inhibit HBV-DNA synthesis via a competitive interaction with the natural substrates of the HBV polymerase, achieving HBV-DNA suppression in the vast majority of compliant patients. Furthermore, currently recommended NAs (tenofovir disoproxil fumarate TDF, entecavir ETV, and tenofovir alafenamide fumarate TAF) have a high barrier to resistance associated with an excellent safety profile, which has made NAs the mainstay of treatment in most countries. However, the NA mechanism of action does not prevent cccDNA formation, and therefore HBV replication rebounds after antiviral therapy is discontinued in most patients, requiring indefinite long-term or even life-long therapy. In this setting, the chance to discontinue NA treatment in carefully selected patients represents a crucial point. There are some encouraging experiences available in the literature, but the issue remains controversial [12-15]. In the last few years, there has been an increase in efforts focusing on new curative strategies, broadly based on two approaches: targeting different steps of the viral cycle, or triggering a powerful immune response able to overcome the functional immune exhaustion characterizing the chronic status [16,17]. On the basis of these premises, the monitoring of chronic HBV infection (CHB) antiviral treatment using the appropriate parameters and tools represents a crucial aspect of infection management. The correct integration of those tools, some of which have already been used in clinical practice for years while others have been recently introduced, defines the optimal monitoring strategy.

Methods

We conducted a non-systematic review using the following electronic sources: PubMed, MEDLINE, Ovid, Scopus, Google Scholar, and Web of Science. The following search words were used: "HBsAg", "qHBsAg", "qHBcAb", "HBcrAg", "HBV-DNA", "HBV-RNA", and "cccDNA" alone or in combination with "serology", "virology", and "monitoring". We took into account all the manuscripts reporting human-related data (inclusion criteria), excluding articles without the full text available, not in English language, abstracts, book chapters, and articles published before 1990 (exclusion criteria).

HBV-DNA

The detection of circulating viral genome in serum or plasma, HBV-DNA, represents the core of CHB monitoring for current therapies. It is essential at pre-therapeutic assessment in order to allocate patients to the proper clinical category, and therefore establish eligibility for treatment [3]. In fact, together with biochemical and histological evaluation, a value of HBV DNA > 2000 IU/mL is the threshold for starting antiviral treatment for chronic patients [3,18,19], according to current international guidelines. Patients with cirrhosis should be treated regardless of the viremia amount [3]. According to the same guidelines, virological response to treatment is defined as persistently undetectable HBV-DNA (by a sensitive polymerase chain reaction (PCR) assay) for NA-based treatment, or HBV-DNA < 2000 IU/mL after therapy discontinuation in interferon-α regimens [3].
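As a compact summary of these thresholds, the sketch below encodes the eligibility criterion and the virological-response definitions quoted above; it is an illustrative simplification of the guideline logic, which also weighs ALT, histology and other clinical factors, and not a clinical decision tool.

```python
# Illustrative simplification of the guideline thresholds discussed above.
# Real decisions also depend on ALT, histology and other clinical factors.
def eligible_for_treatment(hbv_dna_iu_ml: float, cirrhosis: bool) -> bool:
    """HBV-DNA > 2000 IU/mL (with biochemical/histological activity), or any cirrhosis."""
    return cirrhosis or hbv_dna_iu_ml > 2000

def virological_response(hbv_dna_iu_ml: float, regimen: str) -> bool:
    """Undetectable HBV-DNA on NA therapy; < 2000 IU/mL after stopping interferon-alpha."""
    if regimen == "NA":
        return hbv_dna_iu_ml < 10          # below a typical assay LoQ of 5-10 IU/mL
    if regimen == "IFN":
        return hbv_dna_iu_ml < 2000
    raise ValueError("regimen must be 'NA' or 'IFN'")

print(eligible_for_treatment(3500, cirrhosis=False))   # True
print(virological_response(8, regimen="NA"))           # True
```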
To be applicable with sufficient reliability in clinical practice, the tests must have a wide range (up to 7 log10 IU/mL) and a sensitivity of 5-10 IU/mL [3,18,19]. This accuracy is necessary to properly quantify the pre-treatment viral load in patients with high HBV-DNA levels, to use HBV-DNA undetectability as a reliable marker of viral suppression, and to detect early viremia rebounds. Most of the approved HBV-DNA test arrays meet these requirements. These tests have the advantage of using automated or semi-automated platforms with software-assisted analysis, and therefore do not require extra specialized skills from the operator. However, while their level of sensitivity is considered sufficient for patient management, a point of interest is represented by their performance in terms of limit of detection (LoD, the lowest amount of target which can be detected but not quantified as an exact value) and limit of quantification (LoQ, the smallest amount of target which can be measured and quantified with defined precision and accuracy) [20]. The accurate detection and quantification of minimal and residual viremia has, in fact, proven to be clinically relevant in some circumstances, e.g., the identification of the best candidates for therapy discontinuation and the early identification of patients with a reactivation.

Viral Genotype

HBV is differentiated into many genotypes based on genetic divergence. To date, nine genotypes (A-I) of the HBV genome, and numerous sub-genotypes, have been defined, clustered in different geographical areas of the world. Much evidence suggests that viral genotype affects the natural history of the infection, in terms of HBeAg seroconversion rates, severity of liver disease and emergence of mutants [21]. According to this evidence, viral genotype A has been associated with a higher risk of developing chronic infection; HBsAg seroclearance is more likely to occur in genotype A and B patients, compared with genotype C and D patients [22,23]. Regarding HBeAg seroconversion, patients with genotype C achieve seroconversion later than patients with genotype B, and this results in faster progression towards fibrosis, cirrhosis and HCC [24]. A close association between genotype C and HCC has been confirmed by more recent observations [25,26]. In terms of antiviral treatment, viral genotype has been an important variable for interferon-α-based regimens; indeed, genotype A is associated with significantly higher rates of both HBeAg and HBsAg loss/seroconversion [27,28], and genotype B also, although to a lesser extent, identifies potential good responders to IFN treatment. By contrast, genotype seems to have a weaker role in therapy with NA [29], although anecdotal experience reported HBsAg loss only in TDF-treated patients infected with HBV genotypes A and D, while functional cure was not observed in any patients infected with genotype C [30]. On the basis of the above evidence, according to AASLD guidelines, HBV genotyping can be useful in patients being considered for peg-IFN therapy, but it is not otherwise recommended for routine testing or follow-up of patients with CHB [18].

Viral Resistance Tests

Viral breakthrough is defined as a 1-log10 (10-fold) increase in serum HBV DNA from nadir during treatment in a patient who had an initial virological response, and requires the search for viral variants selected during therapy.
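This definition lends itself to a simple longitudinal check; the sketch below flags a ≥ 1 log10 rise from the on-treatment nadir in a series of HBV-DNA measurements (illustrative code with made-up values, not a validated clinical algorithm).

```python
# Illustrative sketch: flag virological breakthrough, defined as a >= 1 log10
# (10-fold) increase in serum HBV-DNA from the on-treatment nadir.
import math

def breakthrough_index(hbv_dna_series_iu_ml, log10_threshold=1.0):
    """Return the index of the first measurement that rises >= log10_threshold above
    the nadir observed so far, or None if no breakthrough is detected."""
    nadir = math.inf
    for i, value in enumerate(hbv_dna_series_iu_ml):
        if value < nadir:
            nadir = value
        elif nadir > 0 and math.log10(value / nadir) >= log10_threshold:
            return i
    return None

# Example series (IU/mL) during NA therapy: suppression followed by a rebound.
series = [250000, 12000, 300, 40, 15, 20, 450]
print(breakthrough_index(series))   # -> 6 (450 IU/mL is > 10x the 15 IU/mL nadir)
```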
Drug resistance tests are performed with various techniques: the gold standard is direct sequencing, which allows the detection of all mutations. Other approaches, such as hybridization assays, are easy to perform but can detect only known specific mutations [31]. Since the introduction of NA for CHB treatment, progressive improvements in the efficacy of drugs have profoundly modified the barrier to antiviral resistance [32]. While therapeutic failure and/or viral breakthrough was quite common with the first approved NAs, this occurrence is rare with the current drugs ETV and TDF. In particular, at present, no typical TDF-resistant mutations have been described [33]. Similarly, if ETV is used as first-line therapy, the rate of resistance is below 1% after 5 years of therapy [34], but is much higher in patients previously treated with "old" drugs. As a consequence, resistance testing is currently most useful in the setting of previously multi-treated patients [18].

HBsAg

Although viremia is the core of treatment monitoring, it is not directly proportional to the number of infected cells [35]. More importantly, the absence of circulating HBV-DNA does not indicate the elimination of cccDNA from hepatocytes, which is responsible for infection perpetuation [36]. Several studies show that 48-52 weeks of therapy induce a negligible, or very slow, reduction of cccDNA [37,38]. Therefore, in the last few years research has focused on the detection of potential surrogate markers that are easy to measure, reliable, and able to provide information on intrahepatic viral activity with a simple serum measurement. Among them, HBsAg, the historical hallmark of HBV diagnosis, has attracted renewed attention. As soon as the quantitative test became available, quantitative HBsAg (qHBsAg) gained a key role in the management of CHB. HBsAg is produced from two sources: translation from transcriptionally active cccDNA, and translation from random viral genes transcribed from integrated HBV-DNA sequences in the host genome [39]. HBsAg is part of the envelope of infectious virions, but it also exists in the form of non-infectious sub-viral spheres and filaments, produced in an amount 100-fold to 100,000-fold higher than mature virions [39]. Several chemiluminescence-based immunoassays for qHBsAg are available on easily handled semi-automated platforms. Most of them have an analytical sensitivity around 0.05 IU/mL; tests with increased sensitivity (at least one log) are now available, aiming to further improve the accuracy of qHBsAg measurement and the detection of variants that can elude less sensitive tests [40,41]. The interest in qHBsAg arises from the hypothesis that it can be considered a surrogate marker for cccDNA [37], and also from the fact that the suppression of qHBsAg represents the best therapy outcome to date. In addition, large amounts of circulating HBsAg are considered one of the causes of the immune impairment characterizing CHB; the inverse correlation between HBsAg serum levels and anti-HBV T cell response [42] reinforces even more the significance of qHBsAg from an immunological perspective. Regarding its clinical usefulness, qHBsAg was first introduced as an additional tool to identify the so-called inactive CHB carriers along with alanine aminotransferase activity, HBV-DNA and histological activity [43], since combining more criteria provides better positive and negative predictive values for identification of the clinical stage. A qHBsAg value of 1000 IU/mL has been proposed as the best cut-off [43,44].
Several studies have investigated the predictive value of qHBsAg kinetics in both IFN-α- and NA-based treatment [45]. Regarding PEG-IFNα therapy, the immunomodulatory effect of PEG-IFNα can induce a robust qHBsAg decline, and the role of qHBsAg in optimizing management seems quite clear, especially in the context of HBeAg positive infection. While qHBsAg baseline values do not seem to have any predictive value, regardless of HBeAg status [46,47], levels of HBsAg below 1500 IU/mL at weeks 12 and 24 were associated with higher rates of response to treatment (defined by recommendations from guidelines) [46,48]. Conversely, qHBsAg levels > 20,000 IU/mL at week 12 or 24, or a decrease < 2 log10, may reliably identify patients not responding to treatment, and therefore rules to discontinue therapy have been implemented based on these data [3]. Similar data have been obtained for HBeAg negative CHB: some studies have identified declines of 0.5 log10 IU/mL by week 12 and 1 log10 IU/mL by week 24 as the thresholds with the best predictive value for treatment response and/or HBsAg loss [49,50]. A combination of no decrease in HBsAg levels and < 2 log10 IU/mL reduction in serum HBV DNA levels at 12 weeks of PegIFN-α therapy predicts no response to the regimen [47]. This evidence has been implemented as a PegIFN-α regimen discontinuation rule [3]. On the other hand, as far as NA treatment is concerned, the majority of patients achieve undetectable HBV-DNA relatively soon after NA start, and therefore the qHBsAg changes during treatment could represent a measure of infection control. However, the decrease of qHBsAg is substantially less pronounced compared to that potentially achievable with IFN-based regimens. Despite the optimal virological response, which exceeds 90% in clinical practice studies after several years [51], mean changes in quantitative HBsAg at week 48 from baseline are minimal, and the decline of qHBsAg per year during entecavir, TDF, and TAF therapy is very slow [52-54]. The necessity of lifelong NA treatment has a number of implications in terms of long-term side effects, economic burden, and different reimbursement policies across countries. Thus, the possibility of stopping NA treatment has received increasing attention in recent years, until it was eventually included in guidelines. The selection of the best candidates for NA discontinuation relies on markers able to predict virological and clinical relapse and, among them, qHBsAg levels have proven effective [13,55,56]. International guidelines uniformly propose HBsAg quantitation for managing peg-IFN therapy, while recommendations for NA therapy require more data [3,18,19].
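The week-12/24 futility rules summarized above can be written compactly, as in the sketch below; this is an illustrative simplification of the published stopping rules for the two serological profiles, not a substitute for the full guideline recommendations.

```python
# Illustrative simplification of the PegIFN-alpha futility/stopping rules described
# above; real guidance also considers genotype, tolerability and clinical context.
import math

def log10_decline(baseline, current):
    return math.log10(baseline / current) if baseline > 0 and current > 0 else 0.0

def stop_pegifn(hbeag_positive, week, qhbsag_now, qhbsag_baseline,
                hbv_dna_now=None, hbv_dna_baseline=None):
    """Return True if the week-12/24 futility criteria quoted in the text are met."""
    if week not in (12, 24):
        return False
    if hbeag_positive:
        # HBeAg-positive: qHBsAg > 20,000 IU/mL, or < 2 log10 decline from baseline.
        return qhbsag_now > 20000 or log10_decline(qhbsag_baseline, qhbsag_now) < 2.0
    # HBeAg-negative (week 12): no qHBsAg decline AND < 2 log10 HBV-DNA decline.
    no_hbsag_decline = qhbsag_now >= qhbsag_baseline
    dna_decline = log10_decline(hbv_dna_baseline, hbv_dna_now)
    return week == 12 and no_hbsag_decline and dna_decline < 2.0

print(stop_pegifn(True, 12, qhbsag_now=25000, qhbsag_baseline=30000))                   # True
print(stop_pegifn(False, 12, 4000, 3900, hbv_dna_now=50000, hbv_dna_baseline=200000))   # True
```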
Serological Monitoring: HBeAg

Data on HBeAg quantification are relatively scarce, despite the recent introduction of some diagnostic measures following the standardization proposed by WHO in 2013 (WHO Expert Committee on Biological Standardization; collaborative study to establish a World Health Organization international standard for hepatitis B e antigen). As a biomarker during the monitoring of PEGIFN-α therapy, low baseline HBeAg titers were associated with a positive predictive value for HBeAg seroconversion following 48 weeks of therapy. Additionally, failure of the HBeAg titer to decline to < 100 PE IU/mL after 24 weeks of therapy was associated with a negative predictive value for HBeAg seroconversion of 96%, a prediction capability stronger than that of serum HBV DNA [57]. In the setting of NA regimens, an early decrease in HBeAg level (at weeks 4 and 12 in patients treated with entecavir) was predictive of virological response, defined as HBV-DNA undetectability at 48 weeks [58], and lower values of qHBeAg were predictive of HBeAg seroconversion in HBV-HIV co-infected patients treated with TDF [59].

Serological Monitoring: HBcrAg

HBcrAg is the most recently introduced serological marker, as an immunoassay for its measurement has only recently become available. HBcrAg consists of the sum of HBcAg, HBeAg, and p22cr, which is a precore protein from amino acid 28 to at least amino acid 150, and this ensemble is encoded by the precore/core region [60]. Similarly to the role assigned to qHBsAg, HBcrAg is currently under investigation to define its capability to reflect intrahepatic virological activity, and therefore monitor the effects of treatment on the infection, when HBV-DNA levels are undetectable and thus no longer informative. The interest in HBcrAg arises from the assumption that its quantification might not be influenced by translation from integrated viral sequences. Hence, HBcrAg quantification may represent a more reliable marker of translational viral activity than qHBsAg. In particular, HBcrAg has been proven to correlate with intrahepatic cccDNA [61] to an extent superior to that of qHBsAg and HBV DNA [60]. Levels of HBcrAg are different across the different phases of HBV natural history: firstly, HBeAg positive infection displays higher HBcrAg levels compared to HBeAg negative infection [62]. Additionally, in HBeAg negative patients, where two distinct clinical forms coexist (chronic hepatitis and chronic infection), HBcrAg can help to distinguish between them. Indeed, these two forms can sometimes overlap, and discriminating between them can be difficult but is necessary to guide clinical management [3]; recent works have shown that a single measurement of HBcrAg allows an accurate identification of the clinical profile of HBeAg negative patients [63,64]. Furthermore, data from different cohorts have demonstrated that HBcrAg is an excellent predictor of HCC development [65,66]. In terms of treatment, HBcrAg has been proven to be a useful tool for monitoring treatment and predicting the response in different clinical contexts. A recent work performed on 222 HBeAg-positive patients treated with PEGIFN-α with or without lamivudine reported a more pronounced HBcrAg decline in patients responding to treatment and identified a cut-off value at week 24 for treatment discontinuation. However, this cut-off did not show a better performance when compared to the qHBsAg level, which is the marker currently recommended by guidelines. These data were consistent with previous experiences [67], which reported the value of HBcrAg at week 12 as a predictor of response, with the identification of cutoffs at week 12 and week 24 with robust negative predictive values. Similarly, a more pronounced decline of HBcrAg was observed in HBeAg negative patients responding to PEGIFNα-2a treatment, albeit with a weaker prediction capability than HBV-DNA or qHBsAg [68]. When the predictive values of qHBsAg and HBcrAg were compared in HBeAg negative patients treated with PEG-IFN, the best performance was obtained with the combined use of both antigens [69]. In the NA therapy setting, several works report a gradual decline in HBcrAg serum levels in both HBeAg positive and HBeAg negative patients [70-72], with a greater magnitude in HBeAg positive patients, as expected.
Despite these encouraging results, HBcrAg has not yet been included in the monitoring strategy recommended by guidelines, pending clearer evidence of its potential superiority compared to the "traditional" markers [3]. As regards the hot topic of NA discontinuation, a number of key points remain to be clarified, in particular how to optimize the use of qHBsAg and HBcrAg, how to define their different prognostic power, and which variables (i.e., age, sex, ethnicity, viral genotype) could influence the performance of these markers. The possibility of using a risk score that combines both HBsAg and HBcrAg has recently been investigated in different study cohorts, overall showing that lower levels of HBcrAg and HBsAg are associated with favourable outcomes after therapy cessation. Therefore, both markers are helpful in identifying the best candidates for NA discontinuation [73-76]. In summary, serological biomarkers are being used progressively more in the classification of CHB phases and in treatment monitoring, in particular in the setting of HBeAg negative infection, where the grey zones of classification are frequent and where the definition of treatment response is more complicated. Even though all these markers are supposed to reflect cccDNA amounts and replicative activity, their relative interchangeability is still a matter of debate, as they are part of a complex apparatus that is influenced by several variables. Recently, a post hoc analysis of a randomized clinical trial of PEGIFN ± NA that simultaneously evaluated qHBsAg, HBcrAg and HBV-RNA found qHBsAg to be superior in predicting HBsAg loss in comparison to the other biomarkers [76]. Further studies on the impact of different variables on serological marker kinetics are warranted in order to define their optimal fields of application.

cccDNA

As stated above, cccDNA is responsible for HBV persistence, and its eradication is the ideal end point of antiviral therapy, because it would mean achieving the sterilizing cure of infection [68]. Given its crucial role in the maintenance of infection, several efforts are focusing on compounds able to directly degrade cccDNA, or, more indirectly, to interfere with its formation and function. As a consequence, the measurement of cccDNA could provide highly useful information on the outcome of infection and its control [5,77], especially for the innovative therapies now approaching the clinic. Firstly, cccDNA quantification could provide a safer selection of patients eligible for NA discontinuation based on the cccDNA reservoir. Furthermore, it could help define whether anti-HBc-positive and HBsAg-negative patients are protected or susceptible to viral reactivation in conditions where reactivation could occur (liver transplant, immunosuppressive therapies), making the tailoring of antiviral prophylaxis possible. Data on IFN-α treatment showed cccDNA reduction after 48 weeks of treatment, especially with the pegylated formulation [78,79], and a reduction below the limit of detection in about half of the study population in patients on long-term NA [80]. However, surprisingly, a recent study described virological rebound after NA cessation even in patients with undetectable cccDNA, and further data are needed to evaluate the role of cccDNA in NA discontinuation [81]. Despite its crucial role, quantification of cccDNA remains a challenge due to some important limitations: first, this analysis requires a liver tissue sample, which can only be obtained with invasive procedures.
Second, the lack of standardized testing methods hampers its implementation in laboratory routine [82]. Some studies have reported the presence of cccDNA in the serum, and its role as a monitoring marker [83], but this point is still controversial.

HBV-RNA

Serum HBV-RNA exists in multiple forms. The predominant form is pregenomic RNA (pgRNA), transcribed from cccDNA, which serves as the template for both reverse transcription and translation of the viral polymerase and core protein. pgRNA is encapsidated into the viral capsid, where it is converted into rcDNA through reverse transcription. Other forms are generated as sub-genomic species [84]. HBV-RNA could represent a biomarker of cccDNA activity. Similarly to other surrogate cccDNA markers, HBV-RNA levels are higher in HBeAg positive infections, and lower in inactive low-viremic infections [85], mirroring HBV-DNA patterns in the different phases of HBV natural history [86]. Of note, the advantage of this marker as a monitoring tool is that it can be measured in the serum compartment, as pgRNA contained in virus-like particles, avoiding the need for invasive procedures to obtain liver tissue samples. Serum levels of HBV-RNA seem to correlate with intrahepatic amounts of pgRNA and cccDNA loads [84,87]. As for PEGIFN-α treatment, baseline HBV-RNA represents a good predictor of virological response, and the kinetics during treatment are a potential element to be included in evaluating the rules for discontinuation of therapy [88]. HBV-RNA testing prior to PEGIFN-based regimens could identify patients with a high probability of virological response, and HBV-RNA kinetics may serve to stop treatment in patients infected with HBV genotypes B or C with little chance of achieving a response [88]. Furthermore, recent studies demonstrated a progressive reduction of HBV-RNA levels during NA treatment in both HBeAg positive and negative patients [89,90]. Despite the reduction of HBV-RNA during treatment, the majority of HBV-DNA suppressed patients still have detectable HBV-RNA, including a portion of patients with HBsAg loss [86], as a consequence of the different effects induced by NA on viral DNA and RNA, which appear to be dissociated: while DNA production is suppressed, the synthesis of pgRNA is not affected, which thus accumulates and is released into serum. This is supported by the observation of levels of HBV RNA higher than HBV DNA during treatment [91]. Based on this evidence, similarly to the other cccDNA surrogates, HBV-RNA has also been proposed as a predictor of viral relapse and HBV-DNA reactivation after NA discontinuation in both HBeAg-positive and negative patients [92,93]. As for other surrogates, in HBeAg positive patients HBV pgRNA status predicts the long-term prognosis of patients in terms of HBeAg clearance [94]. HBV-RNA is a highly promising candidate for adequate monitoring of intrahepatic viral transcriptional activity, and probably the best tool to measure residual viral activity (which is responsible for virological relapse) in long-term HBV-DNA-suppressed patients. However, important barriers still limit the implementation of HBV-RNA measurement in routine clinical and laboratory practice, especially technical issues; in fact, although commercial HBV-RNA tests have recently been developed, a standardized, reproducible technique is not available yet.

Anti-HBc Quantification

Anti-HBc antibodies represent one of the traditional elements of diagnosis and patient classification [18].
They are present in infected individuals, both in the chronic carrier state and in subjects who have already attained HBsAg negativity. Classically considered a hallmark of previous or ongoing exposure to the virus, their quantification has recently been evaluated [95]. Several studies have shown that anti-HBc levels vary across the different clinical phases of CHB and are associated with hepatitis activity [96], and anti-HBc has been proposed as a noninvasive biomarker of significant liver inflammation in CHB patients [97,98]. Recent data suggest that the correlation between serum anti-HBc and inflammation exists regardless of the level of serum ALT, so anti-HBc represents a noninvasive clinical biomarker also in the difficult setting of patients with normal ALT [18,99]. Furthermore, anti-HBc declined in Peg-IFN- and NA-treated patients (p < 0.001), with the lowest levels found in long-term responders who subsequently cleared HBsAg [98]. Baseline levels of anti-HBc were predictive of HBeAg seroconversion in patients treated with PEG-IFNa as well as in those treated with adefovir dipivoxil [95]. Furthermore, baseline anti-HBc levels were strong predictors of double-negative HBV-DNA and HBV-RNA in patients receiving long-term entecavir therapy [100]. In the setting of NA discontinuation, high levels of anti-HBc at the end of NA treatment, together with low levels of HBsAg, were associated with a reduced risk of clinical relapse over a median 2.5-year follow-up [101]. It would be advisable to further investigate the functional and quantitative relationship between the core antigen and its antibodies in order to establish their use in the virological monitoring of CHB. Underlining the importance of the immune dynamics of the HBV core protein, several observations have shown that the T cell-mediated anti-core response is particularly effective in controlling HBV, since the core protein is the preferential target of the immune response in patients with more effective control of infection [17,102]. Immune Monitoring Considering the critical role of the T cell-mediated immune response against the virus, a monitoring tool able to measure and characterize the specific immune response would be ideal for adequate patient characterization. It is accepted that the chronic stage of HBV infection is characterized by a number of T cell dysfunctions, which include up-regulation of co-inhibitory signalling pathways and alterations in metabolic and functional properties [103]. This crucial aspect provides the rationale for the new therapies in development that aim to gain control of infection by restoring immune functions, i.e., therapeutic vaccines, toll-like receptor agonists and checkpoint inhibitors [104]. Clinical trials evaluating these new molecules have included the testing of some correlates of immune activity in their design, for example chemokines, cytokines and IFN-stimulated genes (ISGs) [105]. The fine specificity of the virus-specific T cell response, as well as its quantity and strength, is usually studied through cytokine assays. These techniques are based on the measurement of cytokines (typically IFN-γ) after stimulation of T cells with viral sequences, and they allow a wide protein dimensional spectrum to be evaluated, making this approach also suitable for other, larger viruses using relatively few cells. Indeed, this experimental approach has been applied in some immunological sub-studies of more recent immune compounds [106,107]. 
However, these methodologies are still too sophisticated and a long way from being implemented in routine laboratory practice, and they remain limited to academic settings and specialized laboratories. Concluding Remarks The treatment landscape of CHB is rapidly evolving, considering that there are now more than 30 new HBV drugs in the pipeline [16,108]. These new compounds have the challenging objective of providing a cure for the chronic stage, an end point scarcely met by any currently available approach. In parallel, more sophisticated therapies should be accompanied by a more sophisticated monitoring approach and, indeed, the tools to allocate patients to defined clinical categories and to monitor treatment efficacy are also rapidly evolving, aiming at serological markers able to provide more information than classical serology (qualitative HBsAg) and HBV-DNA. These new biomarkers must reflect the intrahepatic control of viral activity and measure immune control during treatment. Some of these new markers have already been introduced into clinical practice, given their advantages of standardization, automation and ease of use. Others are more difficult to introduce, and more effort is needed to achieve this objective. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest.
Quantum confined Rydberg excitons in reduced dimensions In this paper we propose first steps towards calculating the energy shifts of confined Rydberg excitons in Cu2O quantum wells, wires, and dots. The macroscopic size of Rydberg excitons with high quantum numbers n implies that already μm-sized lamellar, wire-like, or box-like structures lead to quantum size effects, which depend on the principal Rydberg quantum number n. Such structures can be fabricated using focused ion beam milling of cuprite crystals. Quantum confinement causes an energy shift of the confined object, which is interesting for quantum technology. We find in our calculations that the Rydberg excitons gain a potential energy in the μeV to meV range due to the quantum confinement. This effect depends on the Rydberg exciton size and, thus, on the principal quantum number n. The calculated energy shifts in the μeV to meV range should be experimentally accessible and detectable. Rydberg excitons Rydberg atoms form an important platform for quantum applications because of their long lifetimes and large dipole moments, allowing for the formation of long-range interactions and rendering them highly sensitive to external fields on a quantum level [1][2][3][4][5][6][7]. They feature many interesting properties, such as the generation of nonclassical photonic states [8][9][10], or the dipole blockade [11,12], which can be used for optical switching and other quantum information processing applications [13][14][15][16][17][18]. However, their incorporation into the solid state remains difficult. Rydberg excitons are the solid-state analog of Rydberg atoms. Due to the attractive Coulomb interaction between the conduction electron and the valence hole in a semiconductor, a quasiparticle called an exciton can be formed as an excited electronic state of the crystal. This electron-hole pair is weakly bound and extends over many thousands of lattice unit cells, thus its interaction is screened by the static permittivity of the semiconductor (see figure 1(a)). Its energy levels E_n lie within the semiconductor band gap E_g and follow the Rydberg series E_n = E_g − Ry*/(n − δ_P)^2, where Ry* is the exciton Rydberg energy (reduced by the dielectric screening, with ε the background dielectric constant) and δ_P is the quantum defect for P-excitons. This is in analogy to Rydberg atoms, where the outer electron orbits around the nucleus, following the hydrogen formula E_n = −Ry/n^2, with Ry being the Rydberg constant. Now, electron and hole orbit around each other and are termed a Rydberg exciton when they are in quantum states with large principal quantum number n ≳ 10. The total exciton energy, including the center-of-mass kinetic energy ℏ²K²/(2M) of an exciton with total mass M and wavevector K, is E = E_g − Ry*/(n − δ_P)^2 + ℏ²K²/(2M). The center-of-mass kinetic energy of the exciton usually vanishes for direct band gap semiconductors with band gap energies in the visible due to momentum conservation. Therefore, only a series of sharp lines corresponds to an exciton with vanishing momentum [19]. The crystal ground state is a state with no excited electron-hole pairs; it is totally symmetric with no angular momentum and positive parity. Excitons are decisive for semiconductor optical properties [36]. The electron-hole relative motion is on a large scale compared to interatomic distances. 
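As a quick numerical illustration of the Rydberg series just quoted, the minimal sketch below evaluates E_n = E_g − Ry*/(n − δ_P)² for the P-exciton series in Cu2O. The exciton Rydberg energy Ry* = 96 meV and the quantum defect δ_P = 0.23 are the values quoted later in this text; the band gap E_g ≈ 2.172 eV is an assumed literature value, so the absolute energies are indicative only.

# Sketch of the Cu2O Rydberg P-exciton series, E_n = E_g - Ry*/(n - delta_P)^2.
E_g = 2.172        # band gap of Cu2O in eV (assumed literature value)
Ry_star = 0.096    # exciton Rydberg energy in eV (96 meV, quoted in the text)
delta_P = 0.23     # quantum defect for P-excitons (quoted in the text)

for n in range(2, 26):
    E_b = Ry_star / (n - delta_P) ** 2   # binding energy, shrinking roughly as 1/n^2
    E_n = E_g - E_b                      # exciton resonance energy below the band gap
    print(f"n = {n:2d}:  E_b = {1e3 * E_b:7.3f} meV,  E_n = {E_n:.6f} eV")

The printout shows how quickly the binding energy collapses toward the band gap with increasing n, which is why high-n Rydberg excitons are so sensitive to small confinement-induced shifts.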
Therefore, in contrast to Rydberg atoms, which are described by a modified electron wave function only, the exciton wave function Φ_exc is given by the electron wave function φ_e times the hole wave function φ_h times an envelope wave function φ: Φ_exc = φ_e φ_h φ. The envelope wave function describes the electron-hole relative motion, or the angular momentum quantum state l, and can be expressed in terms of spherical harmonics. The exciton wave function obeys the two-particle Schrödinger equation (also known as the Wannier equation) with the reduced exciton mass m_r [37]. Quantum confinement Rydberg excitons in cuprous oxide quantum wells have not yet been studied, although the concept of the related problem is well known. The problem can be described via potential well calculations. These have already been performed extensively for electrons and holes in semiconductors [38]. Mesoscopic spatial confinements of the order of several μm, also known from quantum dots [39,40], are still large in comparison with the lattice constant of the material. The band structure of a semiconductor is therefore only weakly changed when it is spatially confined compared to the bulk material [41]. This assumption allows one to investigate solely the changes in the envelope part of the wave function caused by the confinement potential and is called the envelope function approximation. However, in spatially confined structures, surface polarization effects may play a role due to the different dielectric constants of the material and its surroundings. This becomes particularly important when investigating Rydberg excitons, which are quasiparticles bound by the three-dimensional Coulomb interaction. Quantum confinement effects arise as soon as the spatial confinement is comparable to the quantum object's Bohr radius. Rydberg excitons have Bohr radii up to μm size. This makes it possible to confine them in so-called mesoscopic structures, the dimensions of which are large compared to the lattice constant but comparable to the exciton Bohr radius (see figures 1(b), (c)). We want to confine a whole Rydberg exciton into a quantum well, and expect large energy shifts of this giant quantum object. In excitons, the Coulombic electron-hole attraction gives rise to bound states of the relative motion of the exciton. The excitonic bound levels in quantum wells are, in many respects, analogous to Coulombic impurity bound states, meaning that the electron and hole relative motion is described by a Hamiltonian which is similar to that of an impurity [42]. We intuitively associate an extra kinetic energy with the localization of a particle in a finite region of space, i.e. the impurity binding energy increases when the quantum well thickness decreases. Quantum confined Rydberg excitons In mesoscopic cuprite slabs surrounded by air or vacuum, the potential barrier amounts to 2.98 eV, which is obtained by subtracting the Rydberg binding energy (2.17 eV) from the work function of electrons in cuprite to air (5.15 eV). This finite potential well energy can, however, be treated as an infinite potential barrier due to the fact that the linear dimension of the confinement exceeds the lattice constant of the semiconductor [41]. In the following, we will focus on the weak confinement regime, where the confinement acts only on the center-of-mass motion of the exciton, i.e. the envelope of the Bloch functions, and does not interfere with the relative motion of the electron-hole pair. Here, the Rydberg exciton binding energies are larger than the confinement effects. 
In contrast, in the strong confinement limit, the picture of an exciton would be destroyed, as one would then treat electron and hole separately, with their individual motions being quantized. In this case, the confinement energy dominates over the Rydberg exciton binding energy. Methods In order to calculate the energy shifts a Rydberg exciton experiences when it is confined in a quantum well, we perform potential well calculations for the center-of-mass coordinate of a Rydberg exciton in a cuprous oxide quantum well (see figure 1(d)). The quantum well consists of a cuprite slab that extends over many micrometers in the x and y directions but is confined to only a few hundred nanometers in the z direction. It is surrounded by air or vacuum. Usually, this quantum mechanical problem is not separable due to the confined geometry. In particular, the Coulomb interaction, causing the electron and hole relative motion, is always of three-dimensional nature and depends on the relative electron-hole distance. However, for large quantum wells (weak confinement regime) the confinement or perturbation acts only on the center-of-mass coordinate and is assumed not to disturb the relative motion. Then a separation into relative and center-of-mass coordinates is possible as an approximation. Electron states for infinite potential barriers When a quantum object, such as an electron, is spatially confined in one dimension, it experiences a confinement potential V(z), which enters the Schrödinger equation. The electron wave function can be separated into an in-plane part and a z-dependent part, which allows the Schrödinger equation to be solved via a separation ansatz comparable to equation (2) for the exciton, while equation (9) determines the particle's quantized energy eigenvalues due to the quantum confinement, with j being the quantum state index in the quantum well and L_z the well width. The quantized bound state energies increase with decreasing quantum well width and are proportional to the square of the quantum state index, j^2. Weakly confined Rydberg excitons in cuprite quantum wells Following the calculation scheme from the previous section, we calculate the Rydberg exciton energies in a cuprite quantum well with a quasi-infinite potential well barrier. The Hamiltonian describing this problem is similar to equation (1) but contains an additional quantized energy term Δ_conf = π²ℏ²j²/(2 m_r L_z²) due to quantum confinement in one dimension, with j being the quantum state index and L_z being the quantum well width. For both the lowest (j = 1) and the third excited (j = 3) quantum state index, and for all different well widths L_z shown here, the energy shift Δ_conf increases with increasing principal quantum number n. This increase in energy is larger in absolute terms and steeper the smaller the well width. For the lowest quantum state index j = 1, Δ_conf accounts for up to 14 μeV, while for the third excited quantum state index j = 3, Δ_conf reaches values of up to 140 μeV, which is one order of magnitude larger. Strongly confined Rydberg excitons in cuprite quantum wells The subject of this paper is the weak quantum confinement of Rydberg excitons in cuprite quantum wells. The relevant structures are several hundreds of nanometers to a few micrometers in size, and can be produced straightforwardly using focused ion beams. Furthermore, we want to investigate a single, giant quantum object, the Rydberg exciton. 
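Before turning to the strong confinement regime, the sketch below puts numbers to the one-dimensional weak-confinement term Δ_conf = π²ℏ²j²/(2 m_r L_z²) discussed above. The exciton mass of Cu2O is not quoted in this excerpt, so m_r ≈ 0.36 m_0 is an assumption used only to indicate the order of magnitude; with it, slabs a few hundred nanometres thick give shifts in the μeV range and the j = 3 values come out nine times larger, in line with the scale reported in the text.

import math

hbar = 1.054571817e-34   # J s
m0 = 9.1093837015e-31    # electron rest mass, kg
eV = 1.602176634e-19     # J
m_r = 0.36 * m0          # assumed exciton mass for Cu2O (not given in the excerpt)

def delta_conf(j, L_z):
    """Particle-in-a-box confinement shift: pi^2 hbar^2 j^2 / (2 m_r L_z^2), in joules."""
    return math.pi**2 * hbar**2 * j**2 / (2.0 * m_r * L_z**2)

for L_nm in (1000, 500, 300, 200):
    L_z = L_nm * 1e-9
    d1 = delta_conf(1, L_z) / eV * 1e6   # j = 1, in microelectronvolts
    d3 = delta_conf(3, L_z) / eV * 1e6   # j = 3, nine times larger (j^2 scaling)
    print(f"L_z = {L_nm:4d} nm:  j=1 -> {d1:6.2f} ueV,  j=3 -> {d3:7.2f} ueV")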
In the strong confinement regime, the confinement energy would exceed the Coulomb energy and thus the binding energy of the exciton. In this case, electron and hole are confined separately in their respective confinement potentials. We believe that the following considerations regarding the strong confinement regime might be of interest for the reader. 3.3.1. Exciton binding energy in the strict 2D limit. The 3D Rydberg P-exciton energy is given by E_n = E_g − Ry*/(n − δ_P)^2, with E_g the band gap energy, Ry* the Rydberg constant for excitons in cuprous oxide, n the principal quantum number, and δ_P = 0.23 the quantum defect for P-excitons [43]. In strictly two dimensions, the Rydberg exciton binding energy E_b becomes modified into E_b^2D = Ry*/(n − 1/2)^2 [44]. This implies that the lowest 2D exciton binding energy (n = 1) has a magnitude four times larger than that of the 3D exciton ground state, when neglecting the quantum defect: E_b^2D(n = 1) = 4 E_b^3D(n = 1). Thus, the exciton ground state is farther away from the band gap in 2D, and the 2D Bohr radius is half as big as the 3D value. The excitonic resonances as well as the exciton binding energies are stronger in 2D. Interestingly, the absorption strength decreases more rapidly with n in this situation. The transition from 3D to 2D would cause exciton energy shifts of even a few meV: the exciton binding energy in three dimensions decreases rapidly with n, and so does the binding energy in two dimensions, however at somewhat larger values. Their difference, defined as the energy shift Δ_2D, thus follows the same trend. Note that a larger Rydberg binding energy implies a smaller Rydberg exciton energy, as the binding energy (~meV) is subtracted from the band gap energy (~eV) in order to yield the exciton energy E_n = E_g − E_b. Therefore, the effect the 2D confinement will have on the total exciton energy will be a red-shift toward lower energies. 3.3.2. Influence of the permittivity of the surrounding material outside the quantum well. In the case of a very narrow quantum well, the permittivity inside and outside the well is different. The Coulomb interaction between electron and hole in an exciton is of three-dimensional character and is, thus, not squeezed inside the well, but occurs primarily outside the well with less effective screening [45]. Discussion The quantum confinement inhibits free motion and, thus, influences the kinetic energy of the quantum object. Only discrete values are allowed, leading to a series of quantized states [46]. Confined to a quantum well, Rydberg excitons gain energy, so they experience an energy blue-shift. Within the weak confinement regime, this energy shift amounts to a few μeV and a few tens of μeV for the lowest and third excited quantum state index, respectively. The energy shifts are controllable over a wide range via the three parameters principal quantum number n, quantum state index in the quantum well j, and quantum well width L_z. Such a controllable energy shift could be used for realizing quantum technologies using Rydberg excitons in cuprous oxide [2]. In order to enlarge the range over which the energy can be shifted, one could go to higher confinements, meaning a tighter confinement along one dimension (intermediate or strong confinement regime in 2D). The conditions for the strong confinement regime become, as briefly discussed in section 3.3, more complicated. Strictly speaking, for a very strong exciton confinement, electron and hole become quantized separately, so the quantization energy dominates over the Coulomb interaction energy and we cannot speak of an exciton any more. 
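To make the 2D-versus-3D comparison of section 3.3.1 concrete, the short sketch below evaluates the 3D binding energy Ry*/n², the strict-2D binding energy Ry*/(n − 1/2)², and their difference Δ_2D. The quantum defect is neglected here, exactly as in the factor-of-four statement above, and Ry* = 96 meV is the value quoted in this text.

Ry_star = 96.0   # exciton Rydberg energy in meV (value quoted in the text)

# Quantum defect neglected, as in the factor-of-four comparison above.
for n in (1, 2, 3, 5, 10, 20):
    E_b_3d = Ry_star / n ** 2              # 3D hydrogen-like binding energy
    E_b_2d = Ry_star / (n - 0.5) ** 2      # strict-2D binding energy
    shift = E_b_2d - E_b_3d                # Delta_2D: extra red-shift of the exciton line
    print(f"n = {n:2d}: E_b(3D) = {E_b_3d:7.3f} meV, E_b(2D) = {E_b_2d:8.3f} meV, Delta_2D = {shift:7.3f} meV")
# n = 1 reproduces the factor of four: 96 meV in 3D versus 384 meV in 2D.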
In order to get a feeling for how the energy shifts would develop when going towards the intermediate regime (L_z ≈ 2r), we applied our weak confinement model to excitons in this intermediate confinement regime. The resulting energy blue-shifts Δ_conf are depicted in figure 4 for the lowest quantum state index j = 1, for different well widths L_z, and in dependence on the principal quantum number n. The transition from the weak to the intermediate regime is smooth. As expected, the energy blue-shifts become significantly larger the narrower the quantum wells, and account for up to several tens of meV. In the strong confinement regime, the electron and hole confinement energies would exceed the exciton binding energy Ry* = 96 meV. Outlook Higher quantum confinements can also be reached by confining a quantum object in two or three dimensions. This can be realized by confining Rydberg excitons in cuprite quantum wires or quantum dots (see figure 1(b)). In such structures the energy shift caused by the confinement geometry is the sum of the one-dimensional terms for each confined direction, Δ_conf = Σ_i π²ℏ²j_i²/(2 m_r L_i²). The blue-shift could thus be enhanced by factors of 2 and 3 compared to the quantum well structures. It remains, however, unknown how the Rydberg binding energy would change in such structures. Recent advances in focused ion beam milling using Au+ ions (Raith IonLine) make it possible to fabricate tailored quantum wells with widths in the hundreds of nm range, as shown in figure 5. Therefore, one should in principle be able to weakly confine Rydberg excitons in quantum wells experimentally. The difficulty remains that the well interface needs to be very flat, so as not to disturb the crystal lattice, its symmetry, and thus the exciton formation. The energy shifts could be measured in absorption or by pump-probe spectroscopy. This gives for the first time the opportunity to probe such large quantum objects inside the confined geometry of quantum wells.
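A minimal sketch of the wire and dot cases follows, assuming (as stated in the Outlook above) that the confinement shift is the sum of independent one-dimensional terms along each confined direction; equal widths and the lowest state index j = 1 in every direction then give the factor-of-2 and factor-of-3 enhancements quoted there. The exciton mass m_r ≈ 0.36 m_0 is again only an assumed illustrative value.

import math

hbar, m0, eV = 1.054571817e-34, 9.1093837015e-31, 1.602176634e-19
m_r = 0.36 * m0                      # assumed exciton mass (illustrative only)

def shift_1d(L, j=1):
    """One-dimensional confinement term pi^2 hbar^2 j^2 / (2 m_r L^2), in joules."""
    return math.pi**2 * hbar**2 * j**2 / (2.0 * m_r * L**2)

L = 300e-9                           # equal confinement length in every confined direction
well = 1 * shift_1d(L)               # slab: one confined direction
wire = 2 * shift_1d(L)               # wire: two confined directions
dot = 3 * shift_1d(L)                # dot: three confined directions
for name, val in (("well", well), ("wire", wire), ("dot", dot)):
    print(f"{name}: Delta_conf = {val / eV * 1e6:.2f} ueV")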
Serial Passage of Virus myxomatosum through Cottontail Rabbits In a previous paper (1) it was suggested, on the basis of immunological similarities between the viruses of infectious fibroma and infectious myxoma, that passage of myxoma virus through cottontail rabbits (genus Sylvilagus) might yield fibroma virus just as passage of variola virus through calves supposedly yields vaccinia virus. The use of cottontail rabbits to effect this hypothetical transformation was suggested by the fact that the fibroma virus was originally obtained from a naturally occurring growth in one of these animals (2). The susceptibility of the cottontail rabbit to infectious myxoma is not established, to judge from the literature on the subject. Moses (3) has stated that the wild rabbits of Brazil are insusceptible to experimental infection with Virus myxomatosum except in rare instances, and Hobbs (4) and Hyde and Gardner (5) were unable to infect our native cottontail rabbits with it. The writer, in 3 attempts to infect cottontail rabbits by subcutaneous administration of Virus myxomatosum, obtained 1 doubtful infection. In this rabbit a transitory thickening of the epidermis and subcutaneous tissue developed at the site of injection 16 days after inoculation (1). It seemed likely that if any hope of establishing Virus myxomatosum in cottontail rabbits was to be entertained, a route of inoculation other than subcutaneous should be employed. Attempted Infection by the Intracerebral Route A cottontail rabbit was inoculated intracerebrally with 0.1 cc. of a dilute suspension of testicular myxoma virus. The animal exhibited no signs of illness and was sacrificed on the 9th day. The brain, which showed no macroscopic lesions, was removed and used in preparing an approximately 10 per cent suspension. A cottontail rabbit was inoculated intracerebrally with 0.1 cc. of this suspension and in addition was injected subcutaneously with 2 cc. and intraperitoneally with 8 cc. of the suspension. A domestic rabbit was inoculated subcutaneously with 1 cc. and intratesticularly with 0.5 cc. 
of the suspension. The domestic rabbit died of characteristic myxoma on the 11th day, while the cottontail rabbit developed no illness and was sacrificed on the 9th day. A 10 per cent suspension of its brain was prepared and injected into a cottontail rabbit and a domestic rabbit as in the previous experiment. No evidence of myxoma appeared in the domestic rabbit, and consequently no further cerebral serial passages through cottontail rabbits were attempted. From this experiment it was apparent that Virus myxomatosum survived for 9 days in the brain of a cottontail rabbit and was then transmissible to a laboratory rabbit, but it probably did not increase in amount, since the brain of even the second serial passage cottontail rabbit failed to infect a domestic rabbit. This route of inoculation was obviously unsatisfactory in any attempt to modify the virus by prolonged serial passage. Infection of Cottontail Rabbits by Intratesticular Inoculation Because of the facility with which the fibroma virus infects domestic rabbits when inoculated intratesticularly, it was decided to try this route of inoculation in infecting cottontail rabbits with Virus myxomatosum. It was found that regular and satisfactory infections could be obtained by testicular inoculation supplemented by simultaneous subcutaneous inoculation. In all, fifteen cottontail rabbits have been infected in this manner and two by subcutaneous inoculation alone. Most of the cottontail rabbits used in these experiments were purchased in Kansas, but a few trapped in the neighborhood of the laboratory were also used. No naturally immune animals were encountered. Course of the Disease.--The clinical picture of the disease induced in cottontail rabbits by Virus myxomatosum proved to be very different from that seen in domestic rabbits. The incubation period was long, varying from 6 to 12 days. The disease was an entirely local process. The first evidence of infection in all instances was a slight swelling of the inoculated testicle. Usually the subcutaneous tissue at the site of injection remained negative, although rarely a small firm tumor developed. The inoculated testicle, after swelling had begun, often increased rapidly in size and became very firm, the swelling occasionally being accompanied by edema of the scrotum. The animals, however, showed no evidence of generalized illness and in particular no myxomatous swellings of the eyelids, nose, ears, or anus. None died, but 10 were sacrificed from 11 to 19 days after inoculation. The remaining 7 made uneventful recoveries, and it is believed that all 17 would have survived. The inoculated testicle frequently reached a size 2, and sometimes even 3, times that of the uninoculated testicle. This enlargement persisted for an indefinite period, but in most instances retrogression had begun within 25 days following inoculation. Late in the course of the infection, when the scrotal edema had subsided, the inoculated testicle was frequently irregularly nodular. 
Pathology.--The pathological picture in cottontail rabbits autopsied 11 to 19 days following infection was quite constant. Usually no lesion was present at the site of subcutaneous inoculation, though rarely a small tumor was encountered; firm, pinkish white, edematous, and giving, on cut section, the impression of a fibroma. The inoculated testicle, in addition to being enlarged, was injected and varied in color from a pale pink to a deep purplish red. On cut section it was firm and moist and frequently white and fibromatous in appearance even though the surface of the testicle had appeared injected. The epididymis, sometimes relatively more enlarged than the testicle, was usually white or pinkish white in color, nodular, and cut as though fibrous. The scrotum, when involved, was thickened and its walls were diffusely infiltrated with a gelatinous exudate. Only one subcutaneous tumor has been examined histologically. It had begun to retrogress at the time the animal bearing it was autopsied. The overlying epithelium was normal in appearance, and no cytoplasmic inclusions were observed. The main mass of the tumor had been composed of widely spaced large stellate connective tissue cells, but these, at the time of examination, were degenerating and stained but faintly pink with phloxine-methylene blue. Pink-staining collagen fibrils, coagulated lymph, and many round cells filled the spaces between the degenerating connective tissue cells. Four myxomatous cottontail rabbit testicles have been examined histologically. All presented similar pictures. There was a marked proliferation of connective tissue cells in the interstitium, and mitotic figures in some sections were plentiful. The arrangement of the cells varied; in some it was so loose and the individual cells so large and isolated that the appearance was that of a myxomatous infiltration. In other sections the cells were definitely of the young connective tissue type and formed compact whorls about the seminiferous tubules. In some portions of all sections necrotic seminiferous tubules were seen. This necrosis was probably secondary to pressure exerted by the rapidly proliferating interstitial tissue. Nests of round cells were present in all sections and, in some, large areas of the interstitium were densely infiltrated with this type of cell. No cytoplasmic inclusions were observed in epithelial cells in either the testicle or epididymis. The results of the experiment were consistent in that all serum samples from myxoma-recovered cottontail rabbits possessed some neutralizing properties for Virus myxomatosum. The 2 control rabbits died in 10 and 11 days. 3 of the rabbits receiving mixtures containing convalescent serum died of characteristic infectious myxoma in 17, 19, and 34 days respectively. 1 rabbit, after an incubation period of 17 days, developed what appeared to be a mild myxoma and was found subsequently to have become immunized to Virus myxomatosum. The remaining 2 rabbits showed no evidence of illness, and 1 of these, tested later, was found to be fully susceptible. 3 of the sera in the amounts used thus afforded some protection against Virus myxomatosum but failed to prevent fatal infection, 1 protected sufficiently well to prevent death, while 2 protected completely. 
Three of these sera were tested further for their ability to neutralize the virus of infectious fibroma by using 3 parts of serum to 1 part of 5 per cent testicular fibroma virus suspension. All 3 neutralized fibroma virus completely when the mixtures were tested by subcutaneous inoculation into domestic rabbits. The cottontail rabbits furnishing the 3 serum samples were inoculated subcutaneously and intratesticularly with fibroma virus, of proven infectivity by both routes for a control cottontail rabbit, and were found to be completely resistant to infection. Immunological Relationship of Infectious Myxoma of Domestic Rabbits to Infectious Fibroma.--In an earlier paper (1) it was recorded that a single domestic rabbit upon recovery from an attack of myxoma induced by infection with an almost neutral serum-virus mixture was not only resistant to infection with the virus of infectious fibroma but also yielded a serum which neutralized both the fibroma and myxoma viruses. The exact proportions of neutralizing serum and virus necessary to produce non-fatal myxoma infections in domestic rabbits are difficult to ascertain. Most of the mixtures tried are found to contain either too much or too little serum, in which cases, respectively, the injected animal either acquires no illness and no immunity or develops myxoma and succumbs. However, out of a number of attempts, 3 other domestic rabbits have been given non-fatal attacks of myxoma by inoculation with almost neutral serum-virus mixtures. These 3 animals were found immune to fibroma and their sera capable of neutralizing both the fibroma and myxoma viruses. These experiments indicate that domestic rabbits, as well as cottontail rabbits, not only become resistant to fibroma virus following infection with Virus myxomatosum but also develop antibodies capable of neutralizing fibroma virus. DISCUSSION AND SUMMARY In the experiments presented, Virus myxomatosum was observed to produce only a localized fibromatous or myxomatous orchitis when injected into the testicles of cottontail rabbits. This type of disease was quite unlike the acute fatal illness which the virus caused in domestic rabbits. 10 serial passages of Virus myxomatosum through cottontail rabbits, covering a total elapsed time of 140 days, failed to alter its pathogenicity for domestic rabbits. Although it proved impossible to convert the myxoma virus into fibroma virus by serial passage in cottontail rabbits, it was found that these animals, recovered from myxoma, had a solid resistance to infection with the fibroma virus. Furthermore, their sera possessed neutralizing antibodies effective against the fibroma virus as well as Virus myxomatosum. A similar cross-immunological relationship was observed in the cases of domestic rabbits that had survived an attack of infectious myxoma. BIBLIOGRAPHY 1. Shope, R. E., J. Exp. Med., 1932, 56, 803. 2. Shope, R. E., J. Exp. Med., 1932, 56, 793. 3. Moses, A., Mem. Inst. Oswaldo Cruz, 1911, 3, 46. 4. Hobbs, J. R., Science, 1931, 73, 94. 5. Hyde, R. R., and Gardner, R. E., Am. J. Hyg., 1933, 17, 446. 
1 Sir Charles Martin has kindly allowed me to refer here to his own unpublished experiments of a similar nature. He found that 5 rabbits that had survived infection induced either by contact or by conjunctival inoculation with a strain of Virus myxomatosum which varies in virulence from time to time were resistant to fibroma virus administered intradermally. All showed an allergic reaction 24 to 36 hours after inoculation with fibroma virus, but the superficial hyperemia and swelling disappeared by the 3rd day and no fibromas developed. 5 to 8 months intervened between the recovery of these rabbits from myxomatosis and the test inoculation with fibroma virus. Serial Passage of Virus myxomatosum through Cottontail Rabbits The 17 animals furnishing the basis for the foregoing description of Virus myxomatosum infection in cottontail rabbits were part of an experiment in which an attempt was made to determine whether the virus would be modified by serial passage in this species. Virus myxomatosum has been submitted to 10 serial cottontail rabbit passages over a period of 140 days. The inoculated testicle was used as a source of virus for each succeeding serial passage except the third, when tissue from the subcutaneous lesion was utilized. The virus was tested at each passage by inoculation into domestic rabbits to detect whatever attenuating influence cottontail rabbit passage might exert upon it. In both the cottontail and the domestic rabbit infections only one testicle was inoculated. A record of the passage experiment is outlined in Table I. Consideration of the data presented in Table I indicates that passage of Virus myxomatosum serially through cottontail rabbits did not attenuate it for domestic rabbits. Nothing to suggest conversion of Virus myxomatosum into the virus of infectious fibroma was revealed by the procedure. In the experiments recorded in Table I, animals to be used as a source of virus were sacrificed from the 11th to the 19th day following inoculation. From other experiments not recorded in this table, it is known that Virus myxomatosum persists in the infected testicles of cottontail rabbits and remains fully virulent for domestic rabbits for at least 21 days. In one instance it could not be demonstrated by animal inoculation after 32 days. Immunological Relationship of Infectious Myxoma of Cottontail Rabbits to Infectious Fibroma.--The sera from 6 cottontail rabbits recovered from infection with
Evolution of quasi-bound states in the circular n–p junction of bilayer graphene under magnetic field Electrons in gapless bilayer graphene can form quasi-bound states when a circularly symmetric potential is created in bilayer graphene. These quasi-bound states can be adjusted by tuning the radius and strength of the potential barrier. We investigate numerically the evolution of the quasi-bound state spectra in the circular n–p junction of bilayer graphene under a magnetic field. The energy levels of opposite angular momentum split, and the splitting increases with the magnetic field. Moreover, weak magnetic fields can slightly shift the energy levels of the quasi-bound states, while strong magnetic fields induce additional resonances in the local density of states, which originate from Landau levels. We demonstrate that these numerical results are consistent with a semiclassical analysis based on the Wentzel–Kramers–Brillouin approximation. Our results can be verified experimentally via scanning tunneling microscopy measurements. Model and quasi-bound states spectrum In this section, we consider the scattering of a plane-wave electron on a CNPJ in BLG and calculate the local density of states (LDOS) of the QBSs based on the two-band continuum Hamiltonian. Then we use the LDOS map to analyze how the QBS properties change with different potential barrier radius and strength. The Bernal (A−B′) stacked BLG is shown in Fig. 1a. Taking into account the in-plane hopping parameter γ_AB = γ_A′B′ ≡ t and the inter-layer coupling parameter γ_A′B ≡ t_⊥ for undoped BLG, a four-band model can be obtained by considering one 2p_z orbital on each of the four atomic sites in the unit cell (A, B, A′, B′) 2,23. Near the Dirac points K and K′, the two low-energy bands touch and can be approximated as E_±(k) = ±ℏ²k²/(2m), where m = |t_⊥|/(2v²) is the effective mass, v = 10⁶ m/s is the Fermi velocity of the electrons, and a ≈ 1.42 Å is the nearest-neighbour distance. We consider a CNPJ of gapless BLG and model a circular potential barrier with the step-like potential V = V(r)σ_0 = V_0 Θ(R − r)σ_0, with Θ the Heaviside step function, as shown in Fig. 1b,c. Focusing on the dynamics near a single Dirac point at K, the full two-band Hamiltonian is given by equation (1) 38, where p_± = p_x ± ip_y. The validity of Eq. (1) is discussed in Section "Discussion". To solve this equation, we start by writing the canonical momentum operators as p_± = −iℏ e^(±iφ)(∂_r ± (i/r)∂_φ). The Hamiltonian commutes with the pseudo angular momentum operator J_z = L_z + ℏσ_z due to the radially symmetric potential. Here, L_z = (r × p)_z, and σ_z is the third Pauli matrix. Then, we need to look for eigenfunctions of J_z = L_z + ℏσ_z with eigenvalues j = l + 1 = 1, 2, 3, …, where l = 0, 1, 2, …. Assuming a two-component wavefunction whose spinor components carry the angular phase factors e^(i(j−1)φ) and e^(i(j+1)φ), where the relative phase factor e^(±iφ) is derived from the BLG Hamiltonian, the coupled eigen equations (3) and (4) are obtained. We consider a scattering process 25,39 in which an incident electron with energy E in BLG is scattered by a CNPJ created by a gate-induced circular potential barrier V(r). The incident plane wave in the n region can be written as a certain combination of cylindrical waves; we then use scattering theory and the properties of the Hankel and Bessel functions to obtain the wavefunctions outside the barrier (n region) 25. Within the barrier (p region), the regular eigenfunctions of the Hamiltonian H with energy E are combinations of the Bessel functions J_j and the modified Bessel functions I_j 25. These functions are simultaneously eigenfunctions of H and J_z, with eigenvalues E and j, respectively. 
Here, H_j^(1) and H_j^(2) are the Hankel functions of the first and second kind, K_j and I_j are the modified Bessel functions, and k_n = √(2m|E|)/ℏ and k_p = √(2m|E − V_0|)/ℏ are the wavevectors in the n and p regions, respectively. We use α′ = sgn(E) and α = sgn(E − V_0) to ensure the proper signs for electrons and holes. In the n region, H_j^(1), H_j^(2), and K_j are effective eigenfunctions which are bounded for large arguments, and we disregard other eigenfunctions that diverge for large arguments. Similarly, in the p region, we consider J_j and I_j but ignore the other eigenfunctions, which are divergent at the origin. Thus, the complete wavefunction can be written as ψ_j^(n) = h_j^(2) + S_j h_j^(1) + A_j k_j in the n region and as the analogous combination of j_j and i_j with coefficients B_j and C_j in the p region, where the lower-case symbols denote the corresponding two-component cylindrical waves; for example, i_j(r, φ) = (I_(j−1)(k_p r) e^(−iφ), α I_(j+1)(k_p r) e^(iφ)) e^(ijφ). The coefficients S_j, A_j, B_j and C_j can be obtained from the boundary conditions at the interface of the CNPJ: the wavefunctions and their derivatives are continuous at r = R. Therefore, we can calculate the local density of states from LDOS(j, r, E) ∝ |Ψ(r, E)|² in the n and p regions, and examine it for resonant states at different angular momenta. As j increases, the resonant modes shift from the center of the quantum dot and gradually move outward, behaving similarly to those in monolayer graphene 10. However, the QBSs in BLG for l = 0 are narrower compared to the l = 1 and l = 2 modes. This feature differs from monolayer graphene due to the different band structures 10,26. The QBS spectra can be measured experimentally via STM. Besides, we notice that the QBSs are formed in the p region, which can be regarded as a BLG quantum dot. Figure 2b,c shows how the QBS energy levels change with the potential barrier radius R and strength V_0 for the j = 1 mode at position r_0 = 10 nm. Here, the peaks of the LDOS in the p region represent the energy levels of the QBSs. In general, a higher potential barrier traps more QBSs, with wider energy spacings. Likewise, a larger bilayer graphene quantum dot can trap more QBSs, with narrower energy spacing. Additionally, the trapping time can be obtained from the half-width of the energy levels through τ = ℏ/ΔE. For larger R and higher V_0, QBSs can be trapped longer. These results suggest that we can confine specific energies and angular momentum modes by adjusting the potential size and depth. Note that the above calculations neglect valley mixing, for reasons explained in Section "Discussion". Energy spectra of quasi-bound states in magnetic fields In this section, we numerically solve the radial equation for a CNPJ of BLG in the presence of an external perpendicular magnetic field. In the following, we focus on the case in which the magnetic field is not sufficiently strong to make the system fully evolve into Landau levels. In order to provide a simpler and more intuitive physical picture, we also give a semiclassical analysis of the QBSs based on the WKB approximation at the end of this section. When a magnetic field is applied perpendicular to the graphene surface, the orbital motion of electrons in two dimensions is quantized and the spectrum becomes discrete; the resulting discrete states are called Landau levels. Numerical solution under magnetic fields. To obtain the QBSs under the magnetic field, we start by solving Eqs. (22) and (23) via a two-sided finite difference method discretized on 600 sites in the interval 0+ < r < L (L > R is the truncation position). 
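Before the details of the magnetic-field treatment, a rough numerical companion to the zero-field expressions above may be useful. The sketch evaluates the two-band effective mass m = |t⊥|/(2v²), the wavevectors k_n = √(2m|E|)/ℏ and k_p = √(2m|E − V₀|)/ℏ on the two sides of the junction, and the trapping time τ = ℏ/ΔE for a representative resonance width. The inter-layer coupling t⊥ ≈ 0.39 eV, the barrier strength V₀ = 100 meV, the energy E = 20 meV and the width ΔE = 1 meV are assumed example numbers, not values taken from this paper.

import math

hbar = 1.054571817e-34   # J s
eV = 1.602176634e-19     # J
m0 = 9.1093837015e-31    # kg
v = 1.0e6                # Fermi velocity quoted in the text, m/s

t_perp = 0.39 * eV       # assumed inter-layer coupling (example value)
m = t_perp / (2.0 * v**2)                        # effective mass m = |t_perp| / (2 v^2)
print(f"effective mass m = {m / m0:.3f} m0")

E = 0.020 * eV           # assumed electron energy (example)
V0 = 0.100 * eV          # assumed barrier strength (example)
k_n = math.sqrt(2.0 * m * abs(E)) / hbar         # wavevector in the n region
k_p = math.sqrt(2.0 * m * abs(E - V0)) / hbar    # wavevector in the p region
print(f"k_n = {k_n * 1e-9:.3f} nm^-1, k_p = {k_p * 1e-9:.3f} nm^-1")

dE = 0.001 * eV          # assumed resonance half-width of 1 meV
tau = hbar / dE          # trapping time, tau = hbar / Delta E
print(f"trapping time tau = {tau * 1e12:.2f} ps")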
The initial wavefunctions of the finite difference method are ψ_0 at r ≃ 0+ and ψ_L at r = L. According to the finite difference method, ψ_0 and ψ_L are evolved from the two sides towards the n–p junction boundary r = R. Then, in analogy to the analytic method at B = 0 in Section "Model and quasi-bound states spectrum", we apply the boundary conditions to ψ_0 and ψ_L at r = R to obtain the new coefficients under the magnetic field 11,26,35. To be specific, on the r ≃ 0+ side, owing to eBr/(2ℏ) ≪ (2j − 1)/r and e²B²r²/(4ℏ²) + eB(j − 1)/ℏ ≪ (j² − 1)/r², Eqs. (22) and (23) can be reduced to Eqs. (3) and (4) by neglecting the magnetic terms. Consequently, we can directly use the analytical solution of the zero magnetic field case, a set of Bessel functions [Eqs. (5)–(9)], as ψ_0. On the other side, at r = L, the magnetic field becomes dominant and gives rise to the Landau level spectrum. Thus, we cannot directly use Eqs. (5)–(9) as the initial wavefunction at r = L. Here, we consider the low magnetic field case, and under the influence of disorder the wave function acquires a Lorentzian weight Σ_n Δ²/[(ε − ε_n(B))² + Δ²] on the zero magnetic field ψ_0 to produce ψ_L. This treatment continuously returns to the zero magnetic field case and reflects the Landau-level effect induced by the magnetic field at large distances. Note that this approximation works well when the magnetic length l_B = √(ℏ/(eB)) is comparable to the barrier radius R, because a small l_B would make the whole system evolve into Landau levels, which is beyond the scope of this study. Here, Δ = 0.5 meV is a broadening parameter from disorder scattering, ε_n(B) = ℏω_c √(n(n − 1)) is the spectrum of BLG under a magnetic field 2, and ω_c = eB/m is the cyclotron frequency of non-relativistic electrons with effective mass m. Moreover, the results prove to be insensitive to the details of the cutoff; as an example, we take a cutoff at L = 1.5R below. At relatively weak magnetic fields B < 0.5 T, the magnetic field only slightly shifts the QBSs, as shown in Fig. 3a,b. Here, we plot the LDOS on a logarithmic scale to clearly display the subpeaks induced by the weak magnetic field. The energy shift is about 0.06 meV for 0.1 T, as evaluated from Fig. 3c. Furthermore, we note that the system has time-reversal symmetry at B = 0 T, which guarantees the degeneracy E_K(j) = E_K(−j). However, a finite magnetic field breaks the time-reversal symmetry of the system. The degenerate states of opposite angular momentum separate, and the energy splitting enlarges as B increases. Specifically, with increasing B, the energies of the QBSs for the j = +n modes decrease while those for j = −n increase. For relatively strong magnetic fields, we plot the evolution of the QBS spectra under B = 1.1 T, 1.3 T and 1.7 T, as shown in Fig. 3d,e. Compared with the weak magnetic case, a stronger magnetic field has a more pronounced effect on the QBSs due to the appearance of Landau levels, and this effect becomes larger as B increases. The Landau levels appear next to the QBS energy levels, and the numerical results show no striking interplay between them. The QBSs and Landau levels do not merge or simply repel but coexist in this transition region. The LDOS peaks are superpositions of the confined states and the Landau levels. If we continue to increase the magnetic field B, when it exceeds the critical magnetic field B_c, the QBSs will disappear and the whole system will evolve into Landau levels. 
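The short sketch below evaluates the magnetic length l_B = √(ℏ/(eB)) and the bilayer Landau levels ε_n(B) = ℏω_c√(n(n − 1)) with ω_c = eB/m that enter the boundary treatment above. The effective mass m ≈ 0.034 m_0 is the same assumed example value as in the previous sketch, not a figure from this paper; comparing l_B with a barrier radius of order 40 nm and ε_n with QBS energies of a few tens of meV gives a feel for why the crossover field comes out at a few tesla.

import math

hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C
m0 = 9.1093837015e-31    # kg
m = 0.034 * m0           # assumed BLG effective mass (same example value as above)

def landau(n, B):
    """BLG Landau level eps_n(B) = hbar * omega_c * sqrt(n (n - 1)), omega_c = e B / m."""
    omega_c = e * B / m
    return hbar * omega_c * math.sqrt(n * (n - 1))

for B in (0.1, 0.5, 1.0, 1.7, 3.0):
    l_B = math.sqrt(hbar / (e * B)) * 1e9                 # magnetic length in nm
    levels = [landau(n, B) / e * 1e3 for n in (2, 3, 4)]  # first few levels in meV
    print(f"B = {B:3.1f} T: l_B = {l_B:5.1f} nm, eps_2..4 = "
          + ", ".join(f"{x:5.2f} meV" for x in levels))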
Here, B_c can be evaluated from the magnitudes of the Landau levels and the QBS energies, and it is about 3 T at R = 40 nm for the j = 1 mode. Therefore, the QBSs in a CNPJ of BLG can be tuned by adjusting the magnetic field strength as well. WKB approximation for zero and weak magnetic fields. The aforementioned numerical results cannot directly show the impact of particular parameters on the QBSs. To obtain a better understanding of the numerical results, we can analyze the QBSs with a semiclassical method based on the WKB approximation. Reconsidering Eqs. (22) and (23), we first rewrite these equations in matrix form 37,40,41 and make the substitution χ(r) = ψ(r) exp(i ∫ y(r) dr) [Eq. (29)]. From the resulting semiclassical quantization condition we can obtain the relation between the energy levels E_n,j of the QBSs and the quantum number n. Here, we take the phase factor δ = 0, because the step-like circular potential can be regarded as two vertical potential barriers in the radial direction, and this geometrical shape of the boundary corresponds to δ = 0 44. In Fig. 4, the blue curves in the rightmost panels depict the relation between E_j and n, and the red dots represent the positions of the QBSs. Setting B = 0 T in the above formula, the energy levels of the QBSs are consistent with the rigorous results shown in Section "Model and quasi-bound states spectrum". Likewise, at relatively weak magnetic fields B < 0.5 T, the WKB solutions are in accordance with the numerical results in Section "Numerical solution under magnetic fields". These results verify the applicability of the WKB approximation. Thus, the WKB approximation provides an easier way to predict the QBS energies. Discussion Throughout the above analysis of the CNPJ, we have modeled the electrostatic potential as a step-like function of position. This assumption is justified if a ≪ R, which ensures the absence of inter-valley scattering at the interface, where a is the lattice constant and R is the characteristic length representing the width of the transition region between the junction's n and p sides. Inter-valley scattering is inevitable in experiments, but, as investigated in Fig. 3 of Ref. 15, the authors demonstrated that the inter-valley scattering caused by the step potential is very weak and that the QBSs are nearly insensitive to the smoothness of the boundary. These results are in good agreement with those of experiments 10,13,14. These previous investigations justify the validity of our approximation. Regarding the feasibility of the two-band Hamiltonian in Eq. (1), there are two points that need to be addressed. First, we require E_F < t_⊥/2 ≈ 200 meV to ensure that the quadratic dispersion relation holds. Second, we neglect the trigonal warping term. Thus, our model is reliable for quasi-particles in the energy regime below 200 meV 25. Furthermore, we calculate the LDOS as a function of energy derived from the two-band Hamiltonian and the four-band Hamiltonian to verify the validity of the two-band Hamiltonian in Eq. (1), as shown in Fig. 5. Besides, the QBSs of the CNPJ in our study differ from those of a Coulomb potential. The QBSs of a long-range Coulomb potential exhibit the dramatic property of discrete scale invariance 45,46. In contrast, the circular potential applied to our system is a confining potential, and its QBSs do not show discrete scale invariance. Thus, the results for these two kinds of potential differ in both qualitative and quantitative respects. 
Moreover, the construction of circular n–p junctions in graphene has already been achieved experimentally, and our theoretical results can be useful for their qualitative analysis. Summary In this paper, we have studied the quasi-bound states in a circular n–p junction of bilayer graphene and their evolution under a magnetic field numerically. We have shown that the quasi-bound state spectra can be controlled by adjusting the potential barrier radius and strength. These energy spectra are quantitatively different from those for monolayer graphene due to the different band structures. We have also demonstrated that the energy level degeneracy of opposite angular momentum states breaks under a magnetic field and that the energy splitting enlarges as the magnetic field strength increases. Moreover, applying weak magnetic fields to the system leads to a slight shift of the quasi-bound states, while strong magnetic fields induce additional resonances beside the quasi-bound states. These additional resonances originate from the Landau levels. The evolution of the quasi-bound state spectra under a magnetic field is also supplemented with a semiclassical analysis based on the WKB approximation. Our results are highly relevant to recent experiments and can be verified by STM measurements.
Genome management and mismanagement—cell-level opportunities and challenges of whole-genome duplication Whole-genome duplication (WGD) doubles the DNA content in the nucleus and leads to polyploidy. In whole-organism polyploids, WGD has been implicated in adaptability and the evolution of increased genome complexity, but polyploidy can also arise in somatic cells of otherwise diploid plants and animals, where it plays important roles in development and likely environmental responses. As with whole organisms, WGD can also promote adaptability and diversity in proliferating cell lineages, although whether WGD is beneficial is clearly context-dependent. WGD is also sometimes associated with aging and disease and may be a facilitator of dangerous genetic and karyotypic diversity in tumorigenesis. Scaling changes can affect cell physiology, but problems associated with WGD in large part seem to arise from problems with chromosome segregation in polyploid cells. Here we discuss both the adaptive potential and problems associated with WGD, focusing primarily on cellular effects. We see value in recognizing polyploidy as a key player in generating diversity in development and cell lineage evolution, with intriguing parallels across kingdoms. Whole-genome duplication (WGD) has played a role in the evolution of all major eukaryotic lineages and can involve single somatic cells or entire organisms. At the whole-organism level, WGD has been linked to phenotypic novelty, speciation, and adaptation as well as the evolution of genomic complexity (e.g., see Levin 1983; Schemske 1998, 2002; Otto and Whitton 2000; Soltis et al. 2003; Doyle et al. 2008; Hegarty et al. 2013). However, WGD also occurs in somatic cells in otherwise diploid organisms, where it plays important roles in normal development and likely also in inducible wound repair and stress responses. Polyploid cells can also be a hallmark of aging and disease and may be intermediates in the progression of many tumors, where they increase genetic and karyotypic diversity (e.g., see Storchova and Pellman 2004; Storchova et al. 2006; Thorpe et al. 2007; Storchova 2014; Coward and Harding 2014). A role in disease, while detrimental to the organism, nevertheless highlights the adaptive potential of genome duplication at the level of cell lineages. Despite their adaptive potential, proliferating polyploid cell lineages or organisms face challenges, particularly to chromosome segregation in both mitosis and meiosis, which we discuss below. In most of the cases in which WGD is associated with pathological outcomes, this seems to arise from the propensity for chromosome missegregation, thus emphasizing that understanding both the nature of the problem and how evolution might confer adaptation to it in some cases is important. In this review, we discuss how the adaptive potential and cellular novelty provided by genome duplication can contribute to normal development, environmental responses, and disease states. We discuss challenges that polyploid cells or organisms face, especially with chromosome segregation, and how this might relate to some of the risks associated with unplanned polyploidy. 
We focus exclusively on within-species polyploidy (autopolyploidy), as this is relevant in a wide range of situations from whole-organism polyploidy to cellular WGD in normal development and disease. We do not discuss allopolyploids, which arise from hybridization coupled with WGD; interested readers are referred instead to thorough reviews that cover this topic (e.g., see Schemske 1998, 2002;Otto and Whitton 2000;Soltis et al. 2003;Doyle et al. 2008;Gaeta and Pires 2009;Hegarty et al. 2013). While the underlying biologies of different systems are, of course, distinct, we support the idea that considering potentially informative parallels across systems can provide new testable hypotheses in a range of fields. Somatic WGD Polyploid cells are a normal part of development in otherwise diploid plants and animals and play beneficial and often essential roles arising from the phenotypic novelty of these cells compared with their diploid counterparts. Somatic polyploid cells can arise either by cell fusion or when cell division aborts before cytokinesis (for review, see Nagl 1982;Edgar and Orr-Weaver 2001;Edgar et al. 2014). Two primary paths by which somatic polyploid cells are generated-endocycling and endomitosis-differ in timing of cell cycle exit (Fig. 1A), which has important consequences for a cell's capacity for future division. Endocycling cells entirely skip mitosis (M phase) and have only S and G phases (Fig. 1A, a,b). Sometimes S phase is truncated, in which case chromosome replication may be incomplete (Fig. 1A, a), resulting in chromosomes that cannot properly separate and segregate in mitotic division (Fig. 1B;Nagl 1982;Edgar and Orr-Weaver 2001;Edgar et al. 2014). Endomitosis is distinct from endocycling in that the process aborts later in the cell cycle (Fig. 1A,c,d) and thus has at least some mitotic features, including completion of chromosome replication, chromosome condensation, nuclear envelope breakdown, and sometimes even spindle formation (for review, see Nagl 1982;Edgar and Orr-Weaver 2001;Edgar et al. 2014). Because partial mitotic progression results in complete chromosome replication and separation, endomitotic cells can better retain the ability to re-enter mitosis than endocycled cells (Nagl 1982). Endopolyploid cells have provided lessons in how WGD can alter the biology of cells, highlighting the important roles ploidy variation can play in development, stress resilience, and disease. The ability to generate endopolyploid cells seems to have re-evolved many times and is likely an important adaptation in those tissues or cell types where mitotic division would be deleterious for structural reasons, when rapid growth or large cell size are required, or to allow cell survival when DNA damage makes mitotic division untenable (for review, see Vinogradov et al. 2001;Edgar et al. 2014;Orr-Weaver 2015). Although there is clearly variation in the biology of endopolyploid cell types in different tissues or species, they do share several important consistent features, such as increased cell size and perhaps altered growth potential and physiology, which we discuss below (e.g., see Levin 1983;Butterfass 1987;Galitski et al. 1999;Sugimoto-Shirasu and Roberts 2003;Barow 2006;Orr-Weaver 2015;Scholes and Paige 2015). Big cells and rapid growth-developmental roles of somatic polyploidy Polyploid cells often arise in diploids as a normal and regulated part of development. 
Examples include cells in the blood-brain barrier in insects, where tissue expansion is necessary but mitotic division would disrupt critical septate junctions, and cardiomyocytes, where mitotic division can destroy important intracellular structures (for review, see Orr-Weaver 2015). In placental trophoblast cells, endopolyploidy is important for invasive and nutritive functions (Parisi et al. 2003), and in megakaryocytes and glial cells, it is important for achieving their very large cell sizes (Orr-Weaver 2015). In moths and butterflies, large color-carrying wing scale cells are also endopolyploid (Cho and Nijhout 2013). In plants, there are also numerous examples of specialized large endopolyploid cells such as cotton fibers, cells in the pericarp of tomato fruits, giant cells in Arabidopsis sepals, leaf hairs, and cells essential for the formation of nitrogen-fixing nodules of legumes (for review, see De Veylder et al. 2011;Roeder et al. 2012). In all of these cases, endopolyploidy is a tightly regulated aspect of development and generally a terminally differentiated state. Its wide taxonomic distribution indicates that organisms have evolved to modulate ploidy to fit their needs, suggesting that variation in DNA content and/or cell size via endopolyploidy is adaptive in most multicellular eukaryotes. Conversely, symbionts and parasites can also induce endopolyploidy in their hosts; for example, at "nutrient exchange sites" in plants (for review, see Wildermuth 2010).

Figure 1. (A) Genome duplication can result from cell cycle truncations at any point after DNA replication has commenced but before cytokinesis fully divides cells. Different exits have distinct effects on cell biology and replicative potential. Endocycles (a and b) exit the cell cycle prior to mitosis; early exit prior to completion of S phase (a) leads to incomplete chromosome replication focused mostly on euchromatic regions. Cells that remain capable of mitosis have full-length S phases (b) (e.g., see Fox et al. 2010). Exit after M phase has begun (c) allows chromosome separation, while exit after spindle formation (d) likely contributes to nuclear shape changes (Castellano and Sablowski 2008). Adapted with permission from Macmillan Publishers Ltd.: Nature Reviews Molecular Cell Biology (Edgar et al. 2014), © 2014. (B) Possible architecture of chromosomes with partial DNA replication, showing amplified euchromatic regions and underreplicated heterochromatic regions (see Nagl 1982;Edgar et al. 2014).

Endopolyploidy can also be inducible by variable conditions; e.g., when rapid growth is required or under stress conditions. Again, in most instances, these responses are tightly regulated normal processes. For example, a likely important conditional role for somatic polyploidy is in wound repair. Here endopolyploidy likely becomes important because polyploid cells can grow rapidly into vacant space without time-consuming mitotic divisions (for review, see Edgar et al. 2014). In humans, well-healing wounds have abundant tetraploid cells, while chronic wounds lack them (Ermis et al. 1998;Oberringer et al. 1999). Endopolyploidy is also involved in wound healing in Drosophila melanogaster, where polyploidy and cell fusion were shown to be directly important in repairing damaged epithelium at wound sites (Losick et al. 2013).
This effect may also translate to whole-organism polyploids: A tetraploid morph of a New Zealand snail has faster wound repair than its diploid counterpart, although whether this arises directly from its polyploid state rather than an associated effect remains to be seen (Krois et al. 2013). In plants, somatic genome duplication is linked to another kind of wounding response called overcompensation. Plants with this trait respond to herbivory by regrowing larger than before, often producing greater seed yield than undamaged controls (Scholes and Paige 2014). Recent work in Arabidopsis thaliana shows that strains with higher proportions of endopolyploid cells can overcompensate more effectively than those with fewer, and increasing the ability of strains to endocycle increases their ability to overcompensate Paige 2014, 2015). Although functionally distinct from localized wound repair in animals, the systems share a need for generating rapid tissue growth, and this may explain the shared reliance on endopolyploidy as a mechanism. Another context in which inducible endopolyploidy seems to be important is stress response and resilience, where increased levels of endopolyploidy have been hypothesized to confer direct benefits (for review, see De Veylder et al. 2011;Schoenfelder and Fox 2015;Scholes and Paige 2015). However, in yeast cultures, isogenic diploid and tetraploid cells do not differ in stress tolerance, showing that stress tolerance is not a universal feature of polyploid cells (Andalis et al. 2004). Nevertheless, a number of interesting correlations have been found that hint that somatic WGD may contribute to stress resilience in at least some multicellular organisms. For example, in plant species Medicago and sorghum, root endopolyploidy is associated with salt tolerance (Ceccarelli et al. 2006;Elmaghrabi et al. 2013) and can be induced by salt in tolerant, but not sensitive, strains of sorghum (Ceccarelli et al. 2006). This suggests that the ability to induce endopolyploidy may be directly responsible for the resistance to salt, likely due to cell size changes in the roots that could alter ion uptake. Higher proportions of endopolyploid cells also contribute to greater drought tolerance in plants (Cookson and Granier 2006). Levin (1983) pointed out that biochemical and physiological changes that follow from WGD may also play important roles in adapting polyploid organisms to novel habitats. Indeed, there are hints that at least some of what has been demonstrated for endopolyploidy in plants might translate to whole-organism effects: For example, autotetraploid rice has greater resilience to drought and lower superoxide levels than diploid rice (Yang et al. 2014), and A. thaliana autotetraploids have greater salt tolerance than isogenic diploids (Chao et al. 2013). Just as with endopolyploidy, these effects may arise at least in part from larger cell size. In animals, endopolyploidy in the liver was reported to increase after injury or toxic exposure, and WGD in this case was proposed to be a direct response to stress, since treatments that attenuate oxidative stress also reduce endopolyploidy (Gentric and Desdouets 2014). It has been argued that these polyploid cells can sometimes continue to divide and that aneuploidy resulting from chromosome segregation problems might be adaptive for the selectable diversity that it provides in toxic liver environments (Duncan et al. 2010). 
However, aneuploid cells also have the potential to give rise to pathologic cell lineages, suggesting that this would be a risky strategy at best (Gupta 2000). Furthermore, a recent single-cell resequencing study found evidence that aneuploidy is actually very rare across normal mammalian tissues, including the liver, suggesting that aneuploidy is not an adaptive feature of organ function and remains characteristic only of disease states (Knouse et al. 2014). In both animals and plants, endopolyploidy is also implicated in resilience to DNA damage. DNA doublestrand breaks trigger a signaling cascade that can switch off mitosis and trigger the initiation of endocycling (Ciccia and Elledge 2010; Adachi et al. 2011;Davoli and de Lange 2011). Endocyling after DNA damage is likely important for preventing cell death while halting mitotic proliferation of DNA-damaged cells (Ciccia and Elledge 2010;Adachi et al. 2011;Davoli and de Lange 2011). It could also provide a protective buffer against complete gene loss or haploinsufficiency due to the presence of additional DNA copies. An effect on resilience to DNA damage is also evident in A. thaliana, where increasing the proportion of endopolyploid cells in diploids as well as whole-organism tetraploidy both increase UV tolerance (Hase et al. 2006;Gegas et al. 2014). However, resistance to DNA damage of polyploid cells is certainly not universal: Isogenic ploidy series in yeast have shown that higherploidy cells in many cases do not have higher DNA damage resistance than diploids, and polyploids are actually more sensitive to some DNA-damaging agents (for review, see Storchova and Pellman 2004). This raises the possibility of differences among species or among types of DNA damage as to whether polyploidy is beneficial. It has been proposed that endopolyploidy might directly affect the physiology and metabolic state of cells (Lee et al. 2009), perhaps due to the altered scaling ratios that accompany WGD-associated cell size increases (Weiss et al. 1975;Cavalier-Smith 1978;Levin 1983;Galitski et al. 1999;Storchova and Pellman 2004). Metabolic changes are evident and particularly well studied in cancers, and while, of course, cancers are highly complex and heterogeneous, in some cases, these shifts may be directly linked to polyploidy. Cancer cells commonly show increased reliance on glycolysis and resilience to hypoxic conditions, a set of traits called the Warburg effect (see e.g., Kim and Dang 2006;Vander Heiden et al. 2009). It has been shown that increased glycolysis precedes tumor hypoxia and thus seems to be a facilitator of tumor growth rather than a consequence of it (Vander Heiden et al. 2009). Recently, it was suggested that polyploid, but not diploid, tumor cells show the Warburg traits of resistance to hypoxia and reliance on glycolysis (Zhang et al. 2013). Why this is and whether it arises as a direct or indirect consequence of WGD are unclear, but similar patterns are seen elsewhere: Inhibition of Aurora kinases in acute myeloid leukemia cells also induces polyploidy, again coupled with increased glycolysis (Liu et al. 2011), and, in glioblastoma, polyploid cells similarly show increased glycolysis and are hypersensitive to glycolysis inhibitors relative to their diploid counterparts (Donovan et al. 2014). 
Other metabolic effects may also be linked to WGD: Aspirin and resveratrol selectively target polyploid cells by activating AMP kinase, a core sensor of cellular energy whose hyperactivation tetraploid cells are more sensitive to than diploid cells, for unknown reasons (Lissa et al. 2014). Whether similar metabolic shifts occur in endopolyploid cells in other systems will be interesting to explore. In this light, it is intriguing that, in Arabidopsis arenosa, a screen for selection after organismal WGD identified an AMP kinase-like protein of unknown function as having been under strong selection (Yant et al. 2013), perhaps reflecting a need for metabolic retuning after WGD, although the function of the selected alleles remains to be tested. Are there costs of endopolyploidy? While regulated endopolyploidy can clearly be beneficial, in some circumstances, there may also be costs. In particular, induced or spontaneous polyploidy can have risks associated with the unscheduled resumption of mitosis, as emphasized by the presence in animals of checkpoints that have evolved specifically to limit the proliferation of polyploid cells . While endopolyploid cells are often terminally differentiated, some do remain capable of mitosis. For example, megakaryocytes, which give rise to platelets, are sometimes mitotically competent and can give rise to still-polyploid daughter cells (Leysi-Derilou et al. 2014). A big potential cost of allowing endopolyploid cells to return to mitosis is the risk of chromosome missegregation. In Drosophila and Culex, for example, polyploid cells in the digestive tract and rectum remain capable of mitosis, but these divisions often produce aneuploid progeny cells (Fox et al. 2010). Whether these cause trouble for the organism is not known, but they nevertheless highlight that mitosis from polyploid cells can be problematic. In the mammalian liver, polyploid cells can sometimes also continue to proliferate, but these also experience challenges with chromosome segregation and spindle geometry (Gentric and Desdouets 2014). The danger of mitotic divisions in endopolyploid cells is further highlighted by the observation that tumorigenesis can be promoted by re-entry to mitosis of DNA-damaged polyploid cells, which produces proliferative aneuploid populations (Davoli and de Lange 2011;Edgar et al. 2014). We discuss challenges associated with polyploid chromosome segregation in more detail below. In normal development, there may also be costs associated with cell size increases and the usual irreversibility of polyploidy (Orr-Weaver 2015; Scholes and Paige 2015). For example, it has been proposed that high proportions of polyploid cells in an organ may decrease "metabolic scope" (the ratio of maximum to basal metabolic rate), suggesting that tissues with more polyploid cells have a poorer "safety margin" when operating at high capacity (Vinogradov et al. 2001). Furthermore, because endopolyploid cells cannot generally resume mitotic division (although they can continue to endocycle), the proliferative capacity of tissues with high proportions of endopolyploid cells can be limited. For example, endopolyploidy that occurs in response to toxicity or injury in the liver is thought to limit long-term regenerative capacity and may be a major factor in aging (Gupta 2000). 
The problem of limited mitotic potential could be particularly prominent in fluctuating environments, where one set of conditions may favor endopolyploidy, but, as conditions change, endopolyploidy could become disfavored, at which point it cannot generally be undone (Scholes and Paige 2015). Another potential problem associated with endopolyploidy is that surface to volume ratio decreases with WGD-driven cell size increases. This could result in a problem of communication-a decrease in the efficiency of nuclear-cytoplasmic transport (Orr-Weaver 2015; Scholes and Paige 2015). That cell size matters for polyploid cells is supported by direct comparisons of isogenic diploid and tetraploid yeast cells, where expression differences attributable purely to a ploidy shift included mostly genes whose expression is responsive to cell size independent of DNA content (Galitski et al. 1999;Wu et al. 2010). In some cells, the shift in surface to volume ratio may be compensated to some extent by alterations in nuclear shape that increase surface area, such as deep indentations in the nuclear membrane and/or flattening of the nucleus (Nagl 1982;Buntrock et al. 2012;Pirrello et al. 2013;Orr-Weaver 2015). Indeed, in tomato fruits, complexity of nuclear shape correlates positively with cell ploidy (Nagl 1982;Pirrello et al. 2013), and highly endopolyploid cells in a moth have bizarre giant nuclei that are flat and elaborately branched (Buntrock et al. 2012). These shape changes have been proposed to increase nuclear-cytoplasmic communication, but whether they are actually important for the cells in which they occur is not clear. It could be that nuclear shape modification is an adaptive and regulated response to polyploidy, but it may also be incidental to the endoreduplication process: In A. thaliana mutants with partially disrupted mitotic spindles, endopolyploid cells form with complexly branched nuclei. This distortion is thought to arise because the nuclei in these cells have been subjected to repeated but ultimately unsuccessful application of spindle forces (Castellano and Sablowski 2008). These findings suggest that complex nuclear shapes could arise sporadically but, by lessening the surface area to volume challenge, could nevertheless be advantageous in cells with high ploidy. WGD and adaptation in cell lineages Some proliferating polyploid cell lineages (in tissue culture or tumors, for example) persist, thrive, and diversify, highlighting the adaptive potential of WGD. What is the nature of that potential? Experimental evolution studies have probed the adaptive potential of WGD in cell lineages, and there are good examples of immediate as well as longer-term fitness effects resulting from ploidy shifts (e.g., see Gerstein and Otto 2009). Here we discuss examples from fungi, plants, and animals. Fungi Natural variation for ploidy exists in Saccharomyces cerevisiae, with haploids, diploids, and tetraploids endemic to the same microsite, suggesting that ploidy variation could play an adaptive role (Ezov et al. 2006). The ploidy state flexibility of yeast allows the study of strains that are genetically identical except for ploidy, a useful tool for directly plumbing the effects of WGD in evolving populations (Galitski et al. 1999). In stationary culture (nutrient-limited conditions) in yeast, tetraploidy was detrimental to cell survival. 
Although tetraploid cells could sense nutrient deprivation just as diploid cells could, they did not respond by aborting mitosis and instead continued to proliferate, which, in these conditions, was lethal (Andalis et al. 2004). Leveraging a similar approach, Selmecki et al. (2015) compared the longer-term evolvability of otherwise genetically identical haploids, diploids, and tetraploids and found that evolutionary adaptation to a poor carbon source medium was significantly faster in tetraploids than in diploids. Population genomic resequencing, modeling, and phenotyping indicated that more beneficial mutations arose in autotetraploids than in diploids and that these also have stronger fitness effects (Selmecki et al. 2015). This study provides some of the clearest evidence to date of the greater adaptability of polyploid lineages. However, these results, taken together, also highlight that polyploidy is advantageous in some, but not all, situations. Plants Most studies of ploidy dynamics in plant cell lineages focus on either callus (three-dimensional growths of "dedifferentiated" cells on solid medium) or liquid cell culture. Results from these studies also highlight potential adaptive roles for ploidy shifts as well as the potential influence of environment on whether ploidy is beneficial. Long-term callus growth of a broad range of plant species indicates a common progression over time from diploid to tetraploid cells followed by devolution into aneuploid swarms (e.g., see Murashige and Nakano 1967;Torrey 1967;Singh and Harvey 1975). Importantly, although highly repeatable in callus, a trend to higher ploidy is not universal in plant tissue culture: Liquid-grown suspended cultures in Haplopappus gracilis and Haplopappus ravenii are exclusively made up of diploid cells, while callus growths of the same species are quickly dominated by polyploid cells (Singh and Harvey 1975). This underscores that whether polyploidy is adaptive for cells depends on growth conditions. It has been noted that the progression from diploid to tetraploid and aneuploid cells seen in callus cultures is at least superficially similar to events in neoplastic progression in carcinogenesis (e.g., see Gaspar et al. 1991;Häsler et al. 2012). To what degree these similar paths are (or are not) driven by similar selection pressures in tumors and callus culture remains to be seen. One shared feature may be a selection for tolerance to hypoxia in crowded tumor or callus conditions, as there is some evidence that tolerance to hypoxia may be higher in polyploid cells than diploid cells, at least in humans (Zhang et al. 2013). Whether aneuploidy that follows from tetraploidy in callus growth is selectively advantageous to cell lineages in this context or merely an unavoidable consequence of mitotic divisions in tetraploid calli is not clear. In any case, attempts to generate embryos from callus are usually successful in both the diploid and tetraploid stages but progressively fail as calli become aneuploid, suggesting that, while tolerated in culture, aneuploidy is detrimental to multicellular development, while both diploidy and tetraploidy are tolerated (Torrey 1967;Gaspar et al. 1991). In those cases where regeneration does succeed from aneuploid calli, the plants that result are nevertheless euploid (fully diploid or tetraploid), suggesting that there is strong selection for euploid lineages in multicellular development (Feher et al. 1989;Raja et al. 1992). 
Understanding the dynamics of this process and the nature and strength of selection against aneuploidy in multicellular plant growth will be very interesting, especially in light of the fact that aneuploidy in plant leaves is not incompatible with cell survival (e.g., see Wright et al. 2009). Mammals In mammals, the anarchic proliferation that characterizes within-host cancer evolution commonly includes a high diversity of aneuploid cell lineages associated with disease progression, and at least some of these are thought to arise via chromosome missegregation from tetraploid intermediates (for review, see Davoli and de Lange 2011;Burrell and Swanton 2014;Gerlinger et al. 2014b). Tetraploid intermediates may be quite common and have been suggested to occur in >50% of liver adenocarcinomas and ∼30% of pancreas and lung adenocarcinomas, cervical carcinomas, neuroblastomas, and Hodgkin's lymphomas (see references in Davoli and de Lange 2011). While polyploid intermediates can arise spontaneously, viruses linked to cancer can also trigger this progression. For example, viral-induced cell fusion can trigger the proliferation of autotetraploid human cells if oncogenes or a mutated version of p53 are expressed (Duelli et al. 2005(Duelli et al. , 2007. Human papilloma virus (HPV) infection promotes cell fusion in humans (Hu et al. 2009) and mice (Gao and Zheng 2010) and contributes to the etiology of cervical cancer. A limitation to understanding the role that polyploid cells might play in tumorigenesis has been that it is rarely possible to observe very early events in human cancer etiology. However, Barrett's esophagus provides such a glimpse. This condition predisposes to esophageal adenocarcinoma but is recognizable before neoplastic progression (Galipeau et al. 1996). Biopsies containing elevated quantities of tetraploid cells portend inactivation of the p53 tumor suppressor, disease progression, and the onset of gross aneuploidy. This provides direct evidence that even though some tissue abnormalities were already present, tetraploidy preceded aneuploidy and disease progression (Galipeau et al. 1996). Increased frequency of polyploid cells also occurs during the progression of cervical (Olaharski et al. 2006), breast (Dutrillaux et al. 1991), and other cancers (for review, see Davoli and de Lange 2011). This raises the hypothesis that unstable tetraploid intermediates, even if not primarily causal, can facilitate the generation of highly aneuploid malignancies. Beyond aneuploidy, cancer cells have a large array of additional problems with the maintenance of genome stability (Box 1). That polyploidy may serve as an intermediate promoter of tumorigenesis has empirical support from the observation that tetraploid, but not diploid, p53-null mouse mammary epithelial cells promote tumorigenesis when transplanted into immune-compromised mice (Fujiwara et al. 2005). How exactly WGD might facilitate tumor progression is an open question. This is no surprise given the range of evolutionary trajectories that characterize diverse cancer types and the difficulty of obtaining a truly representative picture of the evolutionary paths that cell lineages follow within a single tumor (Gerlinger et al. 2014b;Walther et al. 2015). 
Beyond the unresolved population genetic considerations of the effects of variation in a polyploid context (e.g., see Otto and Whitton 2000;Gerstein and Otto 2009), on the phenotypic side, recent work suggested that polyploidy may indeed promote rampant aneuploidy and genetic and phenotypic diversity (Lagadec et al. 2012;Erenpreisa and Cragg 2013;Zhang et al. 2013). Furthermore, polyploid giant cells, which are often found in human solid tumors, are highly resistant to low oxygen conditions, cycle slowly (Zhang et al. 2013), and have been proposed to also contribute to lineage expansion and heterogeneity upon induction by chemotherapy or radiation treatment (Lagadec et al. 2012;Zhang et al. 2013). The data available to date thus suggest that polyploidy could substantially contribute to cell lineage diversity and adaptability in diverse situations. Polyploid cell lineages as evolving populations Genome duplication increases the number of available alleles for mutations to accumulate and upon which selection can then act (Otto 2007). The doubling of segregating alleles at each locus means that recessive alleles can be better masked in autotetraploids, allowing for the retention of greater allelic diversity and likely also greater del-eterious genetic load than in diploids (Otto 2007). Because nascent polyploid lineages instantly differ in at least some respects (e.g., cell size) from their diploid progenitors, alleles suddenly find themselves "a stranger in a strange land," and thus even pre-existing variants experience novel selection pressures. This alone may increase phenotypic diversity. Adding to this are the genomic instabilities that arise when mitotic progression resumes in neo-autopolyploids (whether individual cells or entire organisms), which may generate additional genetic diversity via chromosome rearrangements, insertions, and deletions. Aneuploidy not only creates novel variation in its own right but can also expose retained recessive mutations on the remaining chromosomes as additional sheltering copies are lost. Genetic diversity in tumors can predict progression to disease (Maley et al. 2006), likely because selection for phenotypic novelty can arise rapidly in a tumor environment, where heterogeneity develops as distinct lineages encounter variable microenvironmental selection pressures (Marusyk et al. 2012). An emerging view of tumors is one of heterogeneous "communities" that consist of a diversity of highly branched evolutionary trajectories (Gerlinger et al. 2014a). Recent whole-genome resequencing of cell subpopulations from a single tumor sorted by DNA content emphasized tumor complexity and that both sequence and structural variation are common (Malhotra et al. 2015). The potential dangers associated with diversity and polyploidy are highlighted by a study of multiple tumor regions in a single patient with kidney cancer. The cell sublineage in the primary tumor that was most similar to those metastatic sites with the greatest chromosomal instability consisted of tetraploid cells, while remaining regions were diploid, raising the possibility that polyploidy may have contributed to metastatic potential in this case (Gerlinger et al. 2012). Another recent study gave additional evidence that WGD in human colon tumor cell lines promotes tolerance of chromosome abnormalities relative to isogenic diploids. 
Independent autotetraploid lines showed convergent changes, including repeated losses of a region of chromosome 4q that is commonly absent in colorectal cancers in vivo (Dewhurst et al. 2014). Taken together, there is now considerable evidence to support the idea that tumor transformation involves aneuploidy, with tetraploidy sometimes serving as an intermediate "gateway state" that both promotes diversity and aneuploidy and provides greater tolerance of them (Storchova et al. 2006;Thorpe et al. 2007;Storchova 2014;Coward and Harding 2014). There will likely be much to gain from considering cancer, at least in some respects, as a potentially predictable evolutionary process of heterogeneous cell lineages and considering polyploidy and the aneuploidy that can follow from it as central engines of diversity in that process (Coward and Harding 2014).

Box 1. Engines of genome diversification

Aggressive aneuploid cancers, some of which may arise via tetraploid intermediates, exhibit striking genomic modifications, some of which are recognized in many systems, while others are specific to the cancer literature. Recent work has demonstrated that these diversifying processes engender the establishment of genetically rich cell communities upon which natural selection acts (Stephens et al. 2011;Nik-Zainal et al. 2012a,b;Baca et al. 2013). The recognition of most of these processes challenges the view that cancer evolution is simply a process of gradual serial mutation accumulation, arguing instead that punctuated or saltational evolutionary trajectories can also be important (Baca et al. 2013;Lazebnik 2014). Several dramatic examples of the types of genome modification that have been reported in cancer evolution include the following:

Chromoplexy (from the Greek pleco, to weave or braid): A phenomenon of complex genome restructuring in which DNA translocations and deletions emerge in a highly interdependent manner; observed first in prostate cancers, where it frequently accounts for dysregulation of important cancer loci (Baca et al. 2013). It appears to disrupt multiple cancer genes in a coordinated fashion, and the level of chromoplexy is correlated with tumor histological grade (Baca et al. 2013).

Chromothripsis (from the Greek thripsis, shattering): A cataclysmic burst of genome rearrangement during a single cell cycle, in which hundreds of genomic rearrangements occur in a one-generation crisis usually focusing on one or a small number of chromosomes (Stephens et al. 2011).

Kataegis (Greek for shower or thunderstorm): A localized storm of hypermutation that normally colocalizes with somatic rearrangements; common in breast cancer cells (Nik-Zainal et al. 2012a). It is sometimes associated with arrangements that have features of chromothripsis.

Aneuploidy: The state of harboring a chromosome complement that differs from simple multiples of haploid chromosome sets. This can be greater or less than the diploid quantity. Aneuploidy can provide a strong selective advantage, e.g., in response to multiple environmental stressors in yeast (Rancati et al. 2008). Tetraploid cells commonly missegregate chromosomes on account of their supernumerary centrosomes (Ganem et al. 2009), readily generating subclones with aneuploid chromosome complements. Aneuploidy is a hallmark of cancers, found in ∼90% of solid tumors and 50% of blood cancers (Beroukhim et al. 2010).

Chromosomal instability (CIN): A persistently elevated rate of chromosome gain/loss common in many cancers that leads to aneuploidy (Lengauer et al. 1997).

Expression of meiosis genes: Meiosis genes, normally expressed in the mammalian germline, are misexpressed in many cancers. Genome instability, seen commonly as part of the neoplasmic phenotype, could be caused by an admixture of mitotic and meiotic complexes.

Masking: Increased allelic redundancy in polyploid genomes covers the effect of deleterious mutations (as they are less likely to be homozygous and are often a smaller proportion of the allelic complement). It has been suggested that this aspect of tetraploidy would be especially beneficial in the face of a mutator phenotype, as encountered in many cancers (Davoli and de Lange 2011). In addition to buffering potentially deleterious single-nucleotide polymorphisms (SNPs), polyploidy may also buffer the effect of rampant aneuploidy found in cancers (Varetti et al. 2014). This is potentially an opportunity, as retained alleles may provide low-frequency variants upon which selection can act as a sublineage encounters novel tumor microenvironmental challenges.

Polyploid chromosome segregation

One of the most repeatable costs of WGD is the instability of chromosome segregation. In whole-organism polyploidy, chromosome segregation problems are strikingly evident in meiosis, where failures in the sorting of additional homologous chromosomes can lead to infertility or developmental abnormalities in progeny. Similarly, mitosis in polyploid cells also often leads to aneuploidy. Below we discuss chromosome segregation problems that polyploids face in meiosis and mitosis in turn.

Polyploid meiosis

Segregating additional chromosomes in meiosis is a vexing challenge for newly formed autopolyploids. When more than two homologs are present in a meiotic cell, they can form aberrant associations called multivalents among more than two of the available homologs, which can cause segregation problems (e.g., for review, see Ramsey and Schemske 2002;Comai 2005;Bomblies and Madlung 2014). Both the nature of the problem and the solutions that can evolve are most extensively studied in plants, where whole-organism polyploids are especially common. Many newly formed autopolyploids exhibit extensive multivalent formation, often coupled with reduced fertility, whereas most established autopolyploids form primarily or exclusively diploid-like bivalents (e.g., see Shaver 1962;Charpentier et al. 1986;Wolf et al. 1989;Srivastava et al. 1992;Ramsey and Schemske 2002;Santos et al. 2003;Yant et al. 2013). Even in shorter-term selection experiments for fertility in newly generated autopolyploids, quadrivalent frequency declined and bivalent frequency increased in several generations in, e.g., Hyoscyamus albus (Srivastava and Lavania 1990), Pennisetum typhoides (Jauhar 1970), Zea mays (Gilles and Randolph 1951), Secale cereale (Bremer and Bremer-Reinders 1954;Hilpert 1957), and A. thaliana (Santos et al. 2003). There are a handful of exceptions to the general trend of reduced crossover formation in autotetraploids. In several related newly autopolyploid grasses, fertility is positively correlated with chiasma number and quadrivalent formation (Myers 1945;Müntzing 1951;McCollum 1958;Hazarika and Rees 1967;Simonsen 1975).
These species are unusual in two important ways: (1) They form quadrivalents with only terminal chiasmata that disjoin regularly, and (2) unlike in other species, a decline in crossover frequency in these species leads to increased univalent frequency, which is strongly linked to infertility. Thus, the main selective force in these species seems to be on univalent prevention rather than multivalent suppression. Taking the above together, the evolution of meiotic stability after WGD in autopolyploids seems to involve either (1) multivalent prevention via reductions in genome-wide crossover rates or, (2) more rarely, univalent prevention via increases in recombination coupled with modifications of crossover placement to facilitate segregation. Previous observations are consistent with the existence of genetic multivalent suppression systems in autopolyploids that (in most cases) reduce crossover frequency, often to one per chromosome pair, and/or alter the localization of crossovers (e.g., see Shaver 1962;Hazarika and Rees 1967;Watanabe 1983;Srivastava and Lavania 1990). Crossover reduction may be a major route of autopolyploid meiotic stabilization because it reduces the likelihood of multivalent formation. This is particularly clear in the extreme example when each chromosome can form only a single crossover, in which case only bivalents can persist to metaphase. This is further supported by the observation that lower crossover rates in diploids correlate with increased meiotic stability of the neopolyploids derived from them, where diploids with low crossover rates are effectively preadapted for polyploid success (e.g., see Murray et al. 1984;Srivastava et al. 1992;Jenczewski et al. 2002). In autotetraploid Arabidopsis arenosa, selection after WGD acted on genes encoding structural proteins important for the formation of chromosome axes, crossover designation, and synapsis, suggesting that this reflects a coordinated multigenic shift in meiosis that reduces crossing over (Hollister et al. 2012;Yant et al. 2013). A reduction in crossover rates could risk the formation of unpaired univalents if there is not also assurance that at least one crossover forms per bivalent, which is needed for regular chromosome segregation (Jones and Franklin 2006). One mechanism of lowering crossovers without risking univalents and thus aneuploidy is to increase the strength or distance of crossover interference. Crossover interference prevents formation of new crossovers near previously designated ones and has been suggested as a mechanism for crossover reduction in autopolyploids (Shaver 1962;Lavania 1991). By this mechanism, a single crossover forms on each chromosome uninhibited, but additional crossovers would be suppressed. Although the molecular nature of crossover interference is not yet known, a leading theory proposes that the signal is a physical force transmitted along the chromosome axes (Zhang et al. 2014), making it especially interesting that axis components and interacting proteins are under selection in tetraploid A. arenosa (Yant et al. 2013); how these genes might contribute to multivalent suppression is an important as yet unanswered question. Polyploid mitosis Unlike with meiosis, it is not immediately obvious, when considering only genome duplication, why mitosis should be problematic for autopolyploids; it is, as discussed above, nevertheless consistently linked with aneuploidy, showing that polyploid cells clearly do face problems in mitosis. 
Important insights into possible underlying mechanisms came from an elegant study in yeast in which the investigators screened for mutations that were lethal to tetraploids but not diploids (Storchova et al. 2006). Among thousands tested, the investigators identified 39 tetraploid-specific lethal mutants, which collectively indicate that genes important for spindle geometry, sister chromatid cohesion, and homologous recombination are essential specifically in tetraploids (Storchova et al. 2006). The importance of spindle geometry for polyploids highlighted in the yeast study likely arises from a scaling problem. Cell and nuclear volume increase with ploidy, but spindle size does not. This mismatch leads to increased spindle attachment abnormalities that threaten the regularity of chromosome segregation in polyploid cells (Storchova et al. 2006;Storchova and Kuffer 2008). The importance of cohesion and homologous recombination for polyploids may be linked to a saturation of DNA repair due to the presence of additional DNA (Storchova et al. 2006;Storchova and Kuffer 2008). The work in yeast highlights that mitotic problems faced by polyploid cells can arise as direct byproducts of both the altered geometry and increased DNA content of polyploid cells. There are also indications that whole-organism polyploids can suffer mitotic instabilities. For example, in leaf tissues of polyploid plants, aneuploidy has been reported (e.g., see Greilhuber and Weber 1975;Wright et al. 2009). However, more direct comparisons of diploids and tetraploids are needed to conclude that this is a general trend and that it is specific to polyploidy rather than other aspects of the biology of these species. However, if the somatic aneuploidy is truly due to problems with polyploid mitosis in these plants, we expect that the challenges that cause it are likely similar to those noted above for yeast. Importantly, however, mitotic segregation problems may be excluded from plant stem cell tissues. For example, even though mitotic instability has been reported in leaves of an autotetraploid A. arenosa strain (Wright et al. 2009), wild-collected accessions consistently exhibit euploid chromosome complements (Comai et al. 2000;Hollister et al. 2012;Schmickl et al. 2012;Arnold et al. 2015). Thus, either stem cells that give rise to gametes do not become aneuploid, or any aneuploid lineages that do arise are strongly selected against during development such that aneuploid lineages do not persist to contribute to the gamete pool. This is consistent with results from plants regenerated from tissue culture, which suggest selection for euploid cells in multicellular development (Feher et al. 1989;Raja et al. 1992). Thus, it seems that the aneuploidy seen in leaves of polyploids is sporadic. An important question that remains is how such failures are excluded from tissues that ultimately form gametes. Is this purely a selective process, or is there some important difference in the way mitosis is controlled in distinct tissues that makes polyploid cells in some contexts more prone to subsequent aneuploidy than others? Reduction divisions in somatic cells A mysterious process that highlights the sometimes fluid boundaries between mitosis and meiosis is somatic reduction, which refers to the observation that some somatic polyploid cells undergo meiosis-like reduction divisions (e.g., see Huskins 1948;Wilson and Cheng 1949;Rajaraman et al. 2007). 
While not explicitly tested in most species, in Trillium plants, somatic reduction divisions that occur in polyploids were shown to indeed segregate homologous chromosomes from one another as in meiosis (Wilson and Cheng 1949). Likely the same is true in other species in which the progeny cells survive, suggesting that somatic reduction divisions are, at least in this fundamental way, meiosis-like. Cytological studies have provided further hints that somatic reduction divisions often show meiosis-like features: In the onion Allium cepa, chemically induced polyploid somatic cells undergo reduction divisions that exhibit meiosis-like chiasma formation (Huskins 1948). Polyploid p53-null HeLa cells also undergo reduction divisions in which structures form between homologous chromosomes that are similar to meiotic synaptonemal complexes (Ianzini et al. 2009). Just how meiosis-like somatic reduction divisions are or need to be is unclear, but support for the hypothesis that a full meiotic program is not required for reduction division comes from Candida albicans. This species lacks conventional meiosis as well as components of the crossover formation pathway that are strictly required for meiosis in close relatives (e.g., Saccharomyces cerevisiae) (Tzung et al. 2001). Nevertheless, polyploid C. albicans can diploidize via a meiosis-like program that requires the meiotic proteins SPO11, DMC1, and HOP1 (Bennett and Johnson 2003). Results to date highlight that the "bleed-over" of meiosis into mitosis can be hazardous to genome stability. For example, in A. cepa, reduction divisions are generally not well organized and frequently show chromosome missegregation (Huskins 1948). In mammals, polyploid hepatocytes undergo reduction divisions with multipolar spindles and lagging chromosomes that yield a range of aneuploid progeny cells (Duncan et al. 2010;Gentric and Desdouets 2014). In response to DNA-damaging agents or radiation, polyploid cells arising in cancers can express meiosis genes and undergo a pseudomeiotic "depolyploidization," which results in high rates of aneuploidy (Old 2001;Erenpreisa et al. 2005a,b, 2011;Rajaraman et al. 2005, 2007;Kalejs et al. 2006;Puig et al. 2008;Ianzini et al. 2009;Salmina et al. 2010). Another situation that may or may not be relevant to polyploidy specifically is a phenomenon called "meiomitosis." Although this process is not a true reduction division, it merits discussion here because it highlights why somatic reduction divisions might commonly be unstable. The concept of "meiomitosis" comes from observations that aggressive cancers with high levels of aneuploidy often express one or a few meiosis genes (for review, see Old 2001;Simpson et al. 2005;Kalejs et al. 2006;Lindsey et al. 2013). Examples of several of these genes as well as their intriguing overlap with some of the genes implicated in adaptation to polyploidy in A. arenosa are listed in Box 2. The problem with chromosome segregation in meiomitosis seems to arise from the mixing of systems. Because meiosis evolved from mitosis (Hurst and Nurse 1991), many proteins are shared between the two types of division, but protein complexes generally contain at least some homologs that are either mitosis- or meiosis-specific. Why expressing mixtures of mitotic and meiotic proteins may be problematic was recently laid out in detail for the cohesin complex (Strunnikov 2013); we expect that similar stories apply to other meiosis protein heterocomplexes.
Cohesins mediate sister chromatid cohesion in both meiosis and mitosis. In mitosis, the cohesins SMC1 and SMC3 associate with the kleisin Rad21, while, in meiosis, they associate with a related kleisin, Rec8. The Rec8-containing complex remains more strongly associated with sister chromatids, particularly at the centromeres. In meiosis I, this "stickiness" is crucial for retaining sister chromatid cohesion to meiosis II and preventing premature segregation of sisters, but, if aberrantly expressed in mitosis, Rec8 could prevent the timely release of cohesin from sister chromatids and thereby disrupt chromosome segregation (Strunnikov 2013). These problems would apply to both diploid and polyploid cells. Would expression of more meiosis genes in a mitotic cell better ensure regular chromosome segregation? Perhaps certain combinations would indeed help alleviate problems. For polyploid cells, however, the story may be more complex: Work in plants suggests that a diploid meiotic program is ill-suited for polyploid chromosome segregation, and stabilization likely requires a coordinated evolutionary shift in multiple interacting genes (Hollister et al. 2012;Yant et al. 2013). Much remains to be learned about somatic reduction. For example, what meiotic genes minimally suffice to drive somatic reduction? Does stability of these divisions correlate with the number of meiosis genes expressed? Are somatic reduction divisions aberrations, or can they be important in normal development? Does somatic reduction ever provide a reliable remedy for the normally irreversible fate of somatic endopolyploidy? Conclusions Results from a wide range of eukaryotes clearly show that WGD often provides adaptive opportunities. However, in those cases where polyploid cells continue to divide, they face substantial challenges, especially for the regular segregation of chromosomes. This can lead to chromosome instability and aneuploidy, which can sometimes be adaptive at the level of cell lineages but appears in most cases to be deleterious (or, at best, neutral) for the organism at large. Thus, the regulated management of cellular genome content plays important and beneficial roles in development, tissue repair, and stress responses, while its mismanagement can lead to genome instability and contribute to tissue aging and pathologic states, including cancer progression. Viewing proliferating polyploid cell lineages from an evolutionary and comparative perspective may yield novel insights into the role that the double-edged sword of polyploidy plays in the biology of organisms and their evolution. Many open questions remain, such as understanding the mysterious process and developmental role (if any) of somatic reduction divisions, the role that aneuploidy may play in normal development or stress resilience (if any), and the causes and consequences of expressing partial meiotic programs in somatic cells. Furthermore, many potentially interesting effects are currently only correlated with polyploidy, and more work is required to test causality. Learning which effects are direct outcomes of polyploidization itself and their mechanistic basis has the potential to provide important insights. Where similar correlates are observed across kingdoms, deeper investigation of the underlying causes for the apparent similarities may yield novel insights into the most fundamental effects that polyploidy has on the biology of cells, both individually and in the context of the multicellular organisms in which they are found. Box 2. 
Other meiosis proteins expressed in cancer cells REC8: A kleisin important for preventing premature sister chromatid separation in meiosis I. REC8 tethers the cohesin complexes to the centromeres of sister chromatids, effectively gluing them together until meiosis II. Its expression in mitosis could cause chromosome missegregation by preventing proper sister chromatid separation (Ishiguro et al. 2010;Lindsey et al. 2013;Strunnikov 2013). REC8 may also drive depolyploidization in polyploid cancer cells by promoting reductional divisions (Kalejs et al. 2006). A homolog of REC8 was also under selection in a tetraploid plant lineage, suggesting a role in stabilizing post-WGD meiotic chromosome segregation (Yant et al. 2013). DMC1: Together with a related protein, Rad51, DMC1 helps coordinate early events in homologous recombination. DMC1 is overexpressed in several cancers, and targeting its expression in cell culture has been effective in reducing the proliferation and aneuploidy of glioblastoma cells, while it has no effect on nonneoplastic cells (Rivera et al. 2015). Intracranial implantation of glioblastoma cells with knocked down DMC1 levels into immunocompromised mice produced smaller tumors than control cells that express DMC1 (Rivera et al. 2015). In humans, there may be a similar effect that is more dependent on the levels of associated proteins. Reducing the expression of a partner of DMC1, RAD51, sensitizes glioblastoma cells to radiation (Short et al. 2011). Interestingly, while increased DMC1 levels do not lead to negative prognosis in glioblastoma, increased HOP2 and MND1, which are necessary for DMC1-RAD51 to bind to DNA, is correlated with poor survival in The Cancer Genome Atlas (Rivera et al. 2015). SCP1: During meiosis, the axes of homologous chromosomes are bridged and brought closer by the formation of a zipper-like proteinaceous structure called the synaptonemal complex (SC). The central elements of the SC are SCP1 in humans (Meuwissen et al. 1997), Zip1 in yeast (Sym et al. 1993), C[3]G in Drosophila (Page and Hawley 2001), and ZYP1 in Arabidopsis (Higgins et al. 2005). SCP1 expression in fibroblast cells yields SC-like structures, suggesting that, despite the absence of axial element proteins, SCP1 can sometimes form a SClike structure on its own when expressed in somatic cells (Öllinger et al. 2005). SCP1 in humans was the first of the cancer-expressed meiosis genes identified (Türeci et al. 1998). In Drosophila, upregulation of germline genes, including the SC central element C[3]G, is important for brain tumor development (Janic et al. 2010). SC formation could cause inappropriate associations of homologs in mitosis that may not be properly resolved if other meiotic proteins are lacking. Interestingly, the SC central element ZYP1 in A. arenosa also shows strong evidence of having been under selection in the tetraploid lineage, suggesting a role for the SC in stabilizing polyploid chromosome segregation (Yant et al. 2013). HORMAD1 and HORMAD2, an axis of evil? Meiotic HORMA proteins form linear structures along unsynapsed sets of sister chromatids, help mediate crossing over and the synapsis of homologs (Hollingsworth et al. 1990;Armstrong et al. 2002;Niu et al. 2005;Fukuda et al. 2010;Shin et al. 2010;Daniel et al. 2011), and promote use of the homolog rather than the sister chromatid for double-strand break repair (Schwacha and Kleckner 1994;Niu et al. 2005). 
In humans, there are two paralogous meiotic HORMA proteins, HORMAD1 and HORMAD2, both of which have been reported to be expressed in aggres-sive tumors, although not together (Aung et al. 2005;Chen et al. 2005;Liu et al. 2012;Shahzad et al. 2013). At least for HORMAD1, there is evidence that this expression is dangerous: In vitro siRNA silencing of HORMAD1 in ovarian cancer cells decreases their aggressiveness and metastatic potential (Shahzad et al. 2013), and its expression directly contributes to genome instability and aneuploidy in breast cancer cells (Watkins et al. 2015). The latter is apparently due to its usual meiotic role in promoting the use of the homolog as a double-strand break repair template rather than the sister chromatid by suppressing Rad51-mediated double-strand break repair. In the absence of the meiotic recombination machinery, when the Rad51 pathway is blocked, doublestrand break repair is instead shuttled to the error-prone nonhomologous endjoining (NHEJ) pathway, resulting in genome instability (Watkins et al. 2015). Recent work with the yeast homolog Hop1 suggests that purified Hop1 protein can self-associate to form rigid rod-like structures that tightly unite DNA molecules independent of homology (Khan et al. 2012). This finding supports the idea that when these proteins are aberrantly expressed in cells that lack the proteins necessary to subsequently remove them (e.g., Chen et al. 2014;Lambing et al. 2015), they may make chromosomes "sticky," driving aberrant interactions and missegregation. Recently, we found that a unique allele of ASY1, the A. thaliana homolog of HORMAD1/ HORMAD2 and Hop1 (Armstrong et al. 2002), underwent a dramatic selective sweep following WGD in A. arenosa (Hollister et al. 2012;Yant et al. 2013). Whether the chromosome "stickiness" induced by diploid versions of these proteins affects chromosome segregation in cancer cell lineages and polyploid meiosis in similar ways remains to be tested.
2017-11-06T20:21:57.996Z
2015-12-01T00:00:00.000
{ "year": 2015, "sha1": "c0f48ab759a2842887b97b0850440a7634ea20e0", "oa_license": "CCBYNC", "oa_url": "http://genesdev.cshlp.org/content/29/23/2405.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5ee6723fb6f460a3725c6e105c25a6b8703d5785", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
269362827
pes2o/s2orc
v3-fos-license
Symmetries of spatial correlators of light and heavy mesons in high temperature lattice QCD

The spatial $z$-correlators of meson operators in $N_f=2+1+1$ lattice QCD with optimal domain-wall quarks at the physical point are studied for seven temperatures in the range of 190-1540 MeV. The meson operators include a complete set of Dirac bilinears (scalar, pseudoscalar, vector, axial vector, tensor vector, and axial-tensor vector), and each for six flavor combinations ($\bar u d$, $\bar u s$, $\bar s s$, $\bar u c$, $\bar s c$, and $\bar c c$). In Ref. \cite{Chiu:2023hnm}, we focused on the meson correlators of $u$ and $d$ quarks, and discussed their implications for the effective restoration of $SU(2)_L \times SU(2)_R$ and $U(1)_A$ chiral symmetries, as well as the emergence of approximate $SU(2)_{CS}$ chiral spin symmetry. In this work, we extend our study to meson correlators of six flavor contents, and first observe the hierarchical restoration of chiral symmetries in QCD, from $SU(2)_L \times SU(2)_R \times U(1)_A$ to $SU(3)_L \times SU(3)_R \times U(1)_A$, and to $SU(4)_L \times SU(4)_R \times U(1)_A$, as the temperature is increased from 190 MeV to 1540 MeV. Moreover, we compare the temperature windows for the emergence of the approximate $SU(2)_{CS}$ symmetry in light and heavy vector mesons, and find that the temperature windows are dominated by the $(\bar u c, \bar s c, \bar c c)$ sectors.

I. Introduction

Understanding the nature of strongly interacting matter at high temperatures is crucial for uncovering the mechanisms governing matter creation in the early universe and elucidating the outcomes of relativistic heavy ion collision experiments such as those at LHC and RHIC, as well as those of electron ion collision experiments at the planned electron-ion colliders. A first step in this pursuit is to identify the symmetries of Quantum Chromodynamics (QCD) at high temperatures, which are essential in determining the properties and dynamics of matter under extreme conditions. First, consider QCD with $N_f$ massless quarks. Its action possesses the $SU(N_f)_L \times SU(N_f)_R \times U(1)_A$ chiral symmetry. At low temperatures $T < T_c^0$ (where $T_c^0$ depends on $N_f$, and the superscript "0" denotes zero quark mass), quarks and gluons are confined in hadrons, and the $SU(N_f)_L \times SU(N_f)_R$ chiral symmetry is spontaneously broken down to $SU(N_f)_V$ by the vacuum of QCD, with nonzero chiral condensate. Moreover, the $U(1)_A$ axial symmetry is broken by the axial anomaly. At high temperatures, the $SU(N_f)_L \times SU(N_f)_R$ chiral symmetry is restored for $T > T_c^0$, and the $U(1)_A$ symmetry is expected to be effectively restored for $T > T_1^0$, such that the theory possesses the $SU(N_f)_L \times SU(N_f)_R \times U(1)_A$ chiral symmetry for $T > T_{c1}^0$, where

$T_{c1}^0 \equiv \max\left(T_c^0,\ T_1^0\right). \quad (1)$
Next, consider QCD with physical $(u, d, s, c, b)$ quarks. Its action does not possess the $SU(N)_L \times SU(N)_R \times U(1)_A$ chiral symmetry for any integer $N$ from 2 to 5, due to the explicit breaking by the nonzero quark masses. However, as $T$ is increased successively, each quark acquires thermal energy of the order of $\pi T$, and eventually its rest mass energy becomes negligible when $\pi T \gg m_q$. Also, since the quark masses range from a few MeV to a few GeV, it follows that as the temperature is increased successively, the chiral symmetry is restored hierarchically from $SU(2)_L \times SU(2)_R \times U(1)_A$ of $(u, d)$ quarks to $SU(3)_L \times SU(3)_R \times U(1)_A$ of $(u, d, s)$ quarks, then to $SU(4)_L \times SU(4)_R \times U(1)_A$ of $(u, d, s, c)$ quarks, and finally to $SU(5)_L \times SU(5)_R \times U(1)_A$ of $(u, d, s, c, b)$ quarks. Since the restoration of chiral symmetries is manifested by the degeneracies of meson $z$-correlators (as well as other observables), we can use the splittings of the meson $z$-correlators of the symmetry multiplets to examine the realization of the hierarchical restoration of chiral symmetries in high temperature QCD. Strictly speaking, these chiral symmetries should be regarded as "emergent" symmetries rather than "restored" symmetries, since the QCD action with physical quark masses does not possess chiral symmetries at all. In the following, it is understood that "restoration of chiral symmetries" stands for "emergence of chiral symmetries". Similar to (1), we define

$T_{c1}^{qQ} \equiv \max\left(T_c^{qQ},\ T_1^{qQ}\right), \quad (2)$

where $T_c^{qQ}$ ($T_1^{qQ}$) is the temperature for the manifestation of $SU(2)_L \times SU(2)_R$ ($U(1)_A$) chiral symmetry via the meson $z$-correlators with flavor content $qQ$. Then, for $T > T_{c1}^{qQ}$, the theory possesses the $SU(2)_L \times SU(2)_R \times U(1)_A$ chiral symmetry of the $qQ$ sector. Note that since 1987 [2], there have been many lattice studies using the screening masses of meson $z$-correlators to investigate the effective restoration of $U(1)_A$ and $SU(2)_L \times SU(2)_R$ chiral symmetries of $u$ and $d$ quarks in high temperature QCD; see, e.g., Ref. [3] and references therein. However, so far, there seem to be no discussions in the literature about the hierarchical restoration of chiral symmetries in high temperature QCD, except for a brief mention in Ref. [1]. In this work, we investigate the hierarchical restoration of chiral symmetries in $N_f = 2+1+1$ lattice QCD with optimal domain-wall quarks at the physical point. We first observe the hierarchical restoration of chiral symmetries from $SU(2)_L \times SU(2)_R \times U(1)_A$ of $(u, d)$ quarks, to $SU(3)_L \times SU(3)_R \times U(1)_A$ of $(u, d, s)$ quarks, and finally to $SU(4)_L \times SU(4)_R \times U(1)_A$ of $(u, d, s, c)$ quarks, as the temperature is increased from 190 MeV to 1540 MeV. We compute the meson $z$-correlators for a complete set of Dirac bilinears (scalar, pseudoscalar, vector, axial vector, tensor vector, and axial-tensor vector), and each for six combinations of quark flavors ($\bar u d$, $\bar u s$, $\bar s s$, $\bar u c$, $\bar s c$, and $\bar c c$). Then we use the degeneracies of meson $z$-correlators to investigate the hierarchical restoration of chiral symmetries in high temperature QCD. The relationship between the $SU(2)_L \times SU(2)_R$ and $U(1)_A$ chiral symmetries and the degeneracy of meson $z$-correlators for $(u, d)$ quarks in $N_f = 2+1+1$ QCD has been outlined in Ref.
[1], and we follow the same conventions/notations therein.In this study, following Ref.[1], we also neglect the disconnected diagrams in the meson z-correlators.With this approximation, one can straightforwardly deduce the relationship between the SU (N ) L × SU (N ) R and U (1) A chiral symmetries of N (2 ≤ N ≤ N f ) quarks and the degeneracy of meson z-correlators, in QCD with N f quarks, as follows.The restoration of SU (N ) L × SU (N ) R chiral symmetry of N quarks is manifested by the degeneracies of meson z-correlators in the vector and axial-vector channels, C qi q j V k (z) = C qi q j A k (z), (k = 1, 2, 4), for all flavor combinations (q i q j , i, j = 1, • • • , N ).The effective restoration of the U (1) A symmetry of N quarks is manifested by the degeneracies of meson z-correlators in the pseudoscalar and scalar channels, C qi q j P (z) = C qi q j S (z), as well as in the tensor vector and axial-tensor vector channels, C qi q j T k (z) = C qi q j X k (z), (k = 1, 2, 4), for all flavor combinations (q i q j , i, j = 1, • • • , N ).At this point, we recall the studies of the symmetries and meson correlation functions in high temperature QCD with N f massless quarks [4,5], in which one the salient results is that the correlator of the flavor non-singlet pseudoscalar meson qγ 5 λ a q is equal to that of the flavor singlet pseudoscalar meson qγ 5 q, for QCD with N f > 2 at T > T c .This implies that the disconnected diagrams do not have contributions to meson z-correlators in QCD with N f > 2 massless quarks at T > T c .However, at this moment, it is unknown to what extent the disconnected diagrams are suppressed in QCD with N f = 2 + 1(+1)(+1) physical quarks.We will address this question with noise estimation of all-to-all quark propagators, and will report our results in the future. Besides the hierarchical restoration of chiral symmetries, we are also interested in the question whether there are any (approximate) emergent symmetries which are not the symmetries of the entire QCD action but only a part of it, e.g., the SU (2) CS chiral spin symmetry (with U (1) A as a subgroup) [6,7], which is only a symmetry of chromoelectric part of the quark-gluon interaction, and also the color charge.Since the free fermions as well as the chromomagetic part of the quark-gluon interaction do not possess the SU (2) CS symmetry, its emergence in high temperature QCD suggests the possible existence of hadron-like objects which are predominantly bound by chromoelectric interactions.The SU (2) CS symmetry was first observed to manifest approximately in the multiplets of z-correlators of vector mesons, at temperatures T ∼ 220 − 500 MeV in N f = 2 lattice QCD with domain-wall fermions [8]. In Ref. [1], we studied the emergence of SU (2) CS chiral-spin symmetry in N f = 2 + 1 + 1 lattice QCD with optimal domain-wall quarks at the physical point, and found that the SU (2) CS symmetry breaking in N f = 2 + 1 + 1 lattice QCD is larger than that in N f = 2 lattice QCD at the same temperature, for both z-correlators and t-corralators of vector mesons of u and d quarks.In this paper, we extend our study to vector meson z-correlators of all flavor combinations (ūd, ūs, ss, ūc, sc, cc) in N f = 2 + 1 + 1 lattice QCD at the physical point, and compare the emergence of approximate SU (2) CS chiral spin symmetry between different flavor sectors. 
The outline of this paper is as follows. In Sec. II, the hybrid Monte-Carlo simulation of N_f = 2+1+1 lattice QCD with optimal domain-wall quarks at the physical point is briefly outlined, and the essential features and parameters of the seven gauge ensembles for this study are summarized. In Sec. III, the symmetry-breaking parameters for measuring the precision of various symmetries with the splittings of the z-correlators of the symmetry partners are defined. The results of meson z-correlators for six flavor combinations and seven temperatures in the range of 190-1540 MeV are presented in Sec. IV, while the corresponding results of symmetry-breaking parameters are presented in Sec. V. The realization of the hierarchical restoration of chiral symmetries, from SU(2)_L × SU(2)_R × U(1)_A, to SU(3)_L × SU(3)_R × U(1)_A, and to SU(4)_L × SU(4)_R × U(1)_A, as the temperature is increased from 190 MeV to 1540 MeV, is demonstrated in Sec. V A. The temperature windows for the approximate SU(2)_CS symmetry of six flavor combinations are presented in Sec. V B, which reveal the dominance of the heavy vector meson channels of the (ūc, sc, cc) sectors. In Sec. VI, we conclude with some remarks.

II. Gauge ensembles

The gauge ensembles in this study are generated by hybrid Monte-Carlo (HMC) simulation of lattice QCD with N_f = 2+1+1 optimal domain-wall quarks [9] at the physical point, on the 32^3 × (16, 12, 10, 8, 6, 4, 2) lattices, with the plaquette gauge action at β = 6/g^2 = 6.20. This set of ensembles is generated with the same actions [10,11] and algorithms as their counterparts on the 64^3 × (20, 16, 12, 10, 8, 6) lattices [12], but with one-eighth of the spatial volume. The simulations are performed on a GPU cluster with various Nvidia GPUs. For each ensemble, after the initial thermalization, a set of gauge configurations is sampled and distributed to 16-32 simulation units, and each unit performs an independent stream of HMC simulation. For each HMC stream, one configuration is sampled every 5 trajectories. Finally, collecting all sampled configurations from all HMC streams gives the total number of configurations of each ensemble. The lattice parameters and statistics of the gauge ensembles for computing the meson z-correlators in this study are summarized in Table I. The temperatures of these seven ensembles are in the range ∼ 190-1540 MeV, all above the pseudocritical temperature T_c ∼ 150 MeV.
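Since the temporal extent N_t fixes the temperature through T = 1/(a N_t), the quoted temperatures can be checked directly from the lattice spacing. The following minimal sketch (in Python) assumes a ≈ 0.064 fm, the approximate lattice spacing quoted for these ensembles, and uses ħc ≈ 197.327 MeV·fm for unit conversion; it reproduces the range of ~190-1540 MeV.

```python
# Temperatures T = 1/(a*N_t) of the 32^3 x N_t ensembles, assuming a ~ 0.064 fm.
HBARC_MEV_FM = 197.327     # hbar*c in MeV*fm, converts 1/fm to MeV
A_FM = 0.064               # approximate lattice spacing of these ensembles

for n_t in (16, 12, 10, 8, 6, 4, 2):
    t_mev = HBARC_MEV_FM / (A_FM * n_t)
    print(f"N_t = {n_t:2d}  ->  T = {t_mev:7.1f} MeV")
# Output runs from ~193 MeV (N_t = 16) up to ~1542 MeV (N_t = 2),
# consistent with the quoted range of ~190-1540 MeV.
```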
The lattice spacing and the (u/d, s, c) quark masses are determined on the 32^3 × 64 lattices with 427 configurations. The lattice spacing is determined using the Wilson flow [13,14] with the condition {t^2 ⟨E(t)⟩}|_{t=t_0} = 0.3 and the input √t_0 = 0.1416(8) fm [15]. The physical (u/d, s, c) quark masses are obtained by tuning their masses such that the masses of the lowest-lying states extracted from the time-correlation functions of the meson operators {ūγ_5 d, s̄γ_i s, c̄γ_i c} are in good agreement with the physical masses of π±(140), ϕ(1020), and J/ψ(3097). The chiral symmetry breaking due to finite N_s = 16 (in the fifth dimension) can be measured by the residual mass of each quark flavor [16], as given in the last three columns of Table I. The residual masses of the (u/d, s, c) quarks are less than (1.5%, 0.04%, 0.001%) of their bare masses, amounting to less than (0.06, 0.05, 0.02) MeV/c^2, respectively. This asserts that the chiral symmetry is well preserved, such that the deviation from the bare quark mass m_q is sufficiently small in the effective 4D Dirac operator of the optimal domain-wall fermion, for both light and heavy quarks. In other words, the chiral symmetry in the simulations is sufficiently precise to guarantee that the hadronic observables (e.g., meson correlators) can be evaluated to high precision, with the associated uncertainty much less than those due to statistics and other systematics.

III. Symmetry breaking parameters

In order to give a quantitative measure for the manifestation of symmetries from the degeneracy of meson z-correlators with flavor content qQ, we consider the symmetry breaking parameters as follows. To this end, we write the meson z-correlators as functions of the dimensionless variable

zT, (3)

where z is the separation in the z direction and T is the temperature. In general, the degeneracy of any two meson z-correlators C_A(zT) and C_B(zT) with flavor content qQ (where the subscripts A and B denote their Dirac matrices with definite transformation properties, and the flavor content qQ is suppressed) can be measured by the symmetry breaking parameter

κ_AB(zT) = |C_A(zT) − C_B(zT)| / [C_A(zT) + C_B(zT)]. (4)

If C_A and C_B are exactly degenerate at T, then κ_AB = 0 for any z, and the symmetry is effectively restored at T. On the other hand, if there is any discrepancy between C_A and C_B at any z, then κ_AB is nonzero at this z, and the symmetry is not exactly restored at T. Here the denominator of (4) serves as a (re)normalization, and the value of κ_AB is bounded between zero and one. Obviously, this criterion is more stringent than the equality of the screening masses, m^scr_A = m^scr_B, which are extracted from C_A and C_B at large z. Note that κ_AB in (4) is defined slightly differently from its counterpart in Ref. [1]. Also, all z-correlators in (4), as well as those shown in Figs. 1-7, are unnormalized, while those in Ref. [1] are normalized by their values at z/a = 1 (i.e., C_Γ(zT) = 1 at z/a = 1). The former avoids any "accidental" degeneracies due to the normalization. In the following, any symmetry breaking parameter used to measure the degeneracy of two meson z-correlators is always defined according to (4).

A. SU(2)_L × SU(2)_R and U(1)_A symmetry breaking parameters

According to (4), the SU(2)_L × SU(2)_R symmetry breaking parameter can be written as

κ_VA(zT) = |C_{V_k}(zT) − C_{A_k}(zT)| / [C_{V_k}(zT) + C_{A_k}(zT)], (k = 1, 2, 4). (5)

Due to the S_2 symmetry of the z-correlators, one only needs to examine the k = 1 and k = 4 components of (5). In general, the difference between the k = 1 and k = 4 components of (5) is negligible, thus in the following, we only give the results of (5) with k = 1.
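As a concrete illustration of Eq. (4), the following minimal Python sketch computes the symmetry breaking parameter point-by-point in zT for a pair of correlators; the numerical values are purely illustrative and are not measured data.

```python
import numpy as np

def kappa(c_a: np.ndarray, c_b: np.ndarray) -> np.ndarray:
    """Symmetry breaking parameter of Eq. (4):
    kappa_AB(zT) = |C_A(zT) - C_B(zT)| / (C_A(zT) + C_B(zT)),
    bounded between 0 (exact degeneracy) and 1."""
    return np.abs(c_a - c_b) / (c_a + c_b)

# Illustrative (made-up) vector and axial-vector correlators at zT = 0.5, 1, 1.5, 2:
c_v1 = np.array([1.2e-2, 3.1e-3, 8.6e-4, 2.5e-4])   # C_{V_1}(zT)
c_a1 = np.array([1.1e-2, 3.0e-3, 8.5e-4, 2.5e-4])   # C_{A_1}(zT)
print(kappa(c_v1, c_a1))   # small values of kappa_VA signal SU(2)_L x SU(2)_R degeneracy
```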
In general, to determine to what extent the SU (2) L × SU (2) R chiral symmetry is manifested in the z-correlators, it is necessary to examine whether κ V A is sufficiently small.To this end, we use the following criterion for the manifestation of SU (2) L × SU (2) R chiral symmetry at where ϵ V A is a small parameter which defines the precision of the chiral symmetry.For fixed zT and ϵ V A , the temperature T c for the manifestation of the SU (2) L × SU (2) R symmetry is the lowest temperature satisfying (6), i.e., In this study, we set ϵ V A to two different values, 0.05 and 0.01, to study how T c depends on For the U (1) A symmetry breaking, it can be measured by the z-correlators in the pseudoscalar and scalar channels, with as well as in the tensor vector and axial-tensor vector channels, with Due to the S 2 symmetry of the z-correlators, it only needs to examine k = 1 and k = 4 components of (9).In practice, the difference between k = 1 and k = 4 components of (9) is almost zero, up to the statistical uncertainties, thus in the following, we only give the results of (9) with k = 4. Similar to (6), we use the following criterion for the manifestation of U (1) A symmetry at T where ϵ T X is a small parameter which defines the precision of U (1) A symmetry.For fixed zT and ϵ T X , the temperature T 1 for the manifestation of U (1) A symmetry is the lowest temperature satisfying (10), i.e., In this study, we set ϵ T X to two different values, 0.05 and 0.01, to study how the temperature of restoration of U (1) A symmetry depends on ϵ T X . Next, consider QCD with As discussed in Sec. I, upon neglecting the disconnected diagrams in the meson z-correlators, the quarks is manifested by the degeneracies of meson z-correlators in the vector and axial-vector channels, ), for all flavor combinations of N quarks (q i q j , i, j = 1, • • • , N ).Thus, to determine the temperature T c for the manifestation of the SU (N ) L × SU (N ) R chiral symmetry of N quarks, it needs to measure κ qi q j V A for all flavor combinations of N quarks, and check whether they all satisfy the criterion (6) for fixed zT and ϵ V A .This amounts to finding the largest T qi q j c satisfying ( 6) among all flavor combinations of N quarks, i.e., About the U (1) A chiral symmetry of N (2 ≤ N ≤ N f ) quarks, upon neglecting the disconnected diagrams in the meson z-correlators, it is manifested by the degeneracies of meson z-correlators in the pseudoscalar and scalar channels, C qi q j P (z) = C qi q j S (z), as well as in the tensor vector and axial-tensor vector channels, C qi q j T k (z) = C qi q j X k (z), (k = 1, 2, 4), for all flavor combinations of N quarks (q i q j , i, j = 1, • • • , N ).Thus, to determine the temperature T 1 for the manifestation of the U (1) A symmetry via the k = 4 component of the tensor vector and axial-tensor vector channels, it needs to measure κ qi q j T X for all flavor combinations of N quarks, and check whether they all satisfy the criterion (10) for fixed zT and ϵ T X .This amounts to finding the largest T qi q j 1 satisfying (10) among all flavor combinations of N quarks, i.e., B. SU (2) CS symmetry breaking and fading parameters Following the discussion and notations in Ref. [1], the SU (2) CS multiplets for the zcorrelators with flavor content qQ are where the "2" components due to the S 2 symmetry have been suppressed.Thus the degeneracies in the above triplets signal the emergence of SU (2) CS chiral spin symmetry. 
For T ≥ T^{qQ}_{c1}, the SU(2)_L × SU(2)_R × U(1)_A chiral symmetry is effectively restored, and the multiplets in Eqs. (14) and (15) become the enlarged multiplets of Eqs. (16) and (17). This suggests the possibility of a larger symmetry group SU(4) for T > T^{qQ}_{c1}, which contains SU(2)_CS as a subgroup. For the full SU(4) symmetry, each of the multiplets in Eqs. (16) and (17) is enlarged to include the flavor-singlet partners of A_k, T_k and X_k, while the flavor-singlet partners of V_1 and V_4 are SU(4) singlets, where the superscript "0" denotes the flavor singlet.

In general, to examine the emergence of the SU(2)_CS symmetry, one needs to measure the splittings in both (A_1, X_4) and (T_4, X_4) of (14). The splitting of A_1 and X_4 is measured by κ_AT, defined according to (4), while the splitting of T_4 and X_4 is measured by κ_TX (9) with k = 4. Then we use the maximum of κ_AT and κ_TX to measure the SU(2)_CS symmetry breaking, with the parameter κ_CS = max(κ_AT, κ_TX). Note that for the (ūd, ūs, ss, ūc) sectors, κ_AT(zT) > κ_TX(zT) for all z and the seven temperatures in this study, thus κ_CS = κ_AT.

As the temperature T is increased, the separation between the multiplets of SU(2)_CS and U(1)_A is decreased. Therefore, at sufficiently high temperatures, the U(1)_A multiplet M_0 = (P, S) and the SU(2)_CS multiplet M_2 merge together to form a single multiplet; then the approximate SU(2)_CS symmetry becomes washed out, and only the SU(2)_L × SU(2)_R × U(1)_A chiral symmetry remains. The multiplet M_4 never merges with M_0 and M_2 even in the limit T → ∞, as discussed in Ref. [1]. Thus M_4 is irrelevant to the fading of the approximate SU(2)_CS symmetry. Here we use the SU(2)_CS symmetry fading parameter κ, similar to that defined in Ref. [1], except for taking the absolute value and using the unnormalized z-correlators. In general, κ(zT) behaves like an increasing function of T for a fixed zT. If κ(zT) ≪ 1 for a range of T, then the approximate SU(2)_CS symmetry is well-defined for this window of T. On the other hand, if κ(zT) > 0.3 for T > T_f, then the approximate SU(2)_CS symmetry is regarded to be washed out, and only the U(1)_A × SU(2)_L × SU(2)_R chiral symmetry remains. Thus, to determine to what extent the SU(2)_CS symmetry is manifested in the z-correlators, it is necessary to examine whether both κ(zT) and κ_CS(zT) are sufficiently small. For a fixed zT, the following condition serves as a criterion for the approximate SU(2)_CS symmetry in the z-correlators,

κ_CS(zT) ≤ ϵ_cs and κ(zT) ≤ ϵ^f_cs, (23)

where ϵ_cs is for the SU(2)_CS symmetry breaking, while ϵ^f_cs is for the SU(2)_CS symmetry fading. For fixed zT, (23) gives a window of T for the approximate SU(2)_CS symmetry. Obviously, the size of this window depends on ϵ_cs and ϵ^f_cs. That is, larger ϵ_cs or ϵ^f_cs gives a wider window of T, and conversely, smaller ϵ_cs or ϵ^f_cs gives a narrower window of T.

IV. Meson z-correlators of (ūd, ūs, ss, ūc, sc, cc)

Following the prescription proposed in Ref. [1] for the cancellation of the contribution of unphysical meson states to the z-correlators, we compute two sets of quark propagators with periodic and antiperiodic boundary conditions in the z direction, while their boundary
conditions in the (x, y, t) directions are the same, i.e., periodic in the (x, y) directions, and antiperiodic in the t direction. Each set of quark propagators is used to construct the z-correlators independently, and finally the average of these two z-correlators is taken. Then, the contribution of unphysical meson states to the z-correlators can be cancelled configuration by configuration, up to the numerical precision of the quark propagators.

[Fig. 4: The spatial z-correlators of meson interpolators for six flavor combinations (ūd, ūs, ss, ūc, sc, and cc) in N_f = 2+1+1 lattice QCD at T ≃ 385 MeV. Fig. 6: the same at T ≃ 770 MeV; and the corresponding figure at T ≃ 1540 MeV.]

In each of Figs. 1-7, the z-correlators for six flavor contents (ūd, ūs, ss, ūc, sc, cc) at the same T are plotted as a function of the dimensionless variable zT (3). Due to the degeneracy (the S_2 symmetry) of the "1" and "2" components in the z-correlators of vector mesons, only the "1" components are plotted. In general, each panel plots ten C_Γ(zT). For the classification and notations of the meson interpolators, see Table II. For any flavor combination, if the SU(2)_L × SU(2)_R chiral symmetry is restored, then its (V_1, A_1) and (V_4, A_4) become degenerate, and the number of distinct z-correlators appears to be reduced to eight. Furthermore, if the U(1)_A symmetry is also restored, then its (P, S), (T_4, X_4) and (T_1, X_1) also become degenerate, and the number of distinct z-correlators is further reduced to five. Thus one can visualize the effective restoration of the SU(2)_L × SU(2)_R × U(1)_A chiral symmetry when the number of distinct z-correlators becomes five. This provides a simple guideline to look for the restoration of chiral symmetry from the panels in Figs. 1-7.

In Fig. 1, at T = 192 MeV, we see that both T^{ūd}_c (the temperature for the restoration of SU(2)_L × SU(2)_R chiral symmetry in the ūd sector) and T^{ūd}_1 (the temperature for the restoration of U(1)_A symmetry in the ūd sector) are lower than 190 MeV, i.e., T^{ūd}_c < 190 MeV and T^{ūd}_1 < 190 MeV. Thus the SU(2)_L × SU(2)_R × U(1)_A chiral symmetry of ūd has been restored at some temperature lower than 190 MeV, i.e., T^{ūd}_{c1} < 190 MeV.

Next we look at the ss panels in Figs. 1-7. In Fig. 2, at T = 257 MeV, there appear to be five distinct z-correlators in the channels of (P, S), (V_1, A_1), (T_4, X_4), (V_4, A_4) and (T_1, X_1), in spite of the small splittings at large z in the channels of (V_4, A_4) and (T_1, X_1). Thus the SU(2)_L × SU(2)_R × U(1)_A chiral symmetry of ss can be regarded to be restored at T^{ss}_{c1} ∼ 257 MeV. This implies that the SU(3)_L × SU(3)_R × U(1)_A chiral symmetry of (u, d, s) quarks is restored at T ∼ 257 MeV, since the SU(2)_L × SU(2)_R × U(1)_A chiral symmetry in both ūd and ūs sectors has been restored at T < 190 MeV. This is the first step of the hierarchical restoration of chiral symmetries in N_f = 2+1+1 lattice QCD at the physical point, from the restoration of the SU(2)_L × SU(2)_R × U(1)_A chiral symmetry of (u, d) quarks at T^{ūd}_{c1} < 190 MeV to the restoration of the SU(3)_L × SU(3)_R × U(1)_A chiral symmetry of (u, d, s) quarks at T^{ss}_{c1} ∼ 257 MeV. Note that, as discussed in Sec. I and Sec. III, the restoration of the SU(3)_L × SU(3)_R × U(1)_A chiral symmetry of (u, d, s) quarks requires the SU(2)_L × SU(2)_R × U(1)_A chiral symmetry for all six flavor combinations (ūd, ūs, ds, ūu, dd, ss), which are reduced to (ūd, ūs, ss) if m_u = m_d. Here we have assumed that in high temperature QCD, the contribution of the disconnected diagrams to the z-correlator of q̄Γq is negligible in comparison with that of the connected ones, as discussed in Sec. I. Similarly, the restoration of the SU(4)_L × SU(4)_R × U(1)_A chiral symmetry of (u, d, s, c) quarks requires the SU(2)_L × SU(2)_R × U(1)_A chiral symmetry for all six flavor combinations ūd, ūs, ss, ūc, sc, and cc.

Next, we look at the sc panels in Figs. 1-7. The SU(2)_L × SU(2)_R × U(1)_A chiral symmetry seems to manifest at T = 385 MeV, and it becomes highly pronounced at T = 513 MeV. This implies that T^{sc}_{c1} is in the range of 385-512 MeV. In general, a more precise estimate of T_c and T_1 can be obtained by the criteria (6) and (10), which will be given in the next section.

Finally, we look at the cc panels in Figs. 1-7. The SU(2)_L × SU(2)_R × U(1)_A chiral symmetry of cc seems to manifest at T = 770 MeV, and it becomes highly pronounced at T = 1540 MeV. This implies that T^{cc}_{c1} is in the range of 770-1540 MeV, and also the restoration of the SU(4)_L × SU(4)_R × U(1)_A chiral symmetry of (u, d, s, c) quarks at T^{cc}_{c1} ∼ 770-1540 MeV, since the SU(2)_L × SU(2)_R × U(1)_A chiral symmetry in the other sectors (ūd, ūs, ss, ūc, sc) has already been restored at lower temperatures. This gives the second step of the hierarchical restoration of chiral symmetries in N_f = 2+1+1 lattice QCD at the physical point, from the restoration of the SU(3)_L × SU(3)_R × U(1)_A chiral symmetry of (u, d, s) quarks at T^{ss}_{c1} ∼ 257 MeV to the restoration of the SU(4)_L × SU(4)_R × U(1)_A chiral symmetry of (u, d, s, c) quarks at T^{cc}_{c1} ∼ 770-1540 MeV. A more precise estimate of T^{cc}_c and T^{cc}_1 can be obtained by the criteria (6) and (10), which will be given in the next subsection.
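To make the counting guideline above concrete, the following sketch groups the ten z-correlators of a panel into degenerate classes using the measure of Eq. (4); the channel names and the threshold value are illustrative choices, not values taken from the paper.

```python
import numpy as np

def kappa(c_a, c_b):
    # Degeneracy measure of Eq. (4), evaluated point-by-point in zT.
    return np.abs(c_a - c_b) / (c_a + c_b)

def count_distinct(correlators, eps=0.05):
    """Count the number of mutually non-degenerate z-correlators.
    `correlators` maps channel names ('P', 'S', 'V1', 'A1', 'V4', 'A4',
    'T1', 'X1', 'T4', 'X4') to arrays C(zT); two channels are treated as
    degenerate if max over z of kappa is below eps."""
    groups = []                                   # lists of mutually degenerate channels
    for name, corr in correlators.items():
        for group in groups:
            if np.max(kappa(corr, correlators[group[0]])) < eps:
                group.append(name)
                break
        else:
            groups.append([name])
    return len(groups)

# If (V1, A1), (V4, A4), (P, S), (T1, X1), (T4, X4) pair up within eps, the count
# drops from ten to five, signalling effective SU(2)_L x SU(2)_R x U(1)_A restoration.
```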
Besides the hierarchical restoration of chiral symmetries, we are also interested in visually identifying the emergence of the approximate SU (2) CS chiral spin symmetry in each of the six flavor sectors.To this end, we look for the appearance of three approximately distinct multiplets M 0 = (P, S), which become more pronounced at higher temperatures, and they are in the order The emergence of M 2 and M 4 is in agreement with the SU (2) CS multiplets of ( 14) and (15), and the SU (2) CS × SU (2) L × SU (2) R multiplets of ( 16) and ( 17).This suggests the emergence of the approximate SU (2) CS and SU (4) symmetries.Moreover, the separation between the multiplets M 2 and M 0 is decreased as the temperature is increased further. Thus, at sufficiently high temperatures, say T > T f , M 2 and M 0 merges together to form a single multiplet, then the approximate SU (2) CS symmetry becomes washed out, and only the SU (2) L × SU (2) R × U (1) A chiral symmetry remains.In other words, the approximate SU (2) CS symmetry can only appear in a window of T above T c1 , i.e., T c1 < T cs ≲ T ≲ T f , where T cs (T f ) depends on ϵ cs (ϵ f cs ) in the criterion (23) for the emergence (fading) of the approximate SU (2) CS symmetry.Note that the multiplet M 4 never merges with the multiplets M 0 and M 2 , even in the limit T → ∞, as discussed in Ref. [1].Thus M 4 is irrelevant to the fading of the approximate SU (2) CS symmetry.The above provides a guideline to look for the emergence and the fading of the approximate SU (2) CS symmetry in Figs.1-7. First, we look at the panels of ūd and ūs in Figs.1-7.We see that their z-correlators are almost identical for all seven temperatures.Furthermore, as T is increased from 192 MeV to 770 MeV, we see the emergence of three approximately distinct multiplets M 0 , M 2 , and M 4 , which become more pronounced at higher temperatures, while the separation of M 0 and M 2 become smaller.This suggests the emergence of the approximate SU (2) CS and SU (4) symmetries in the window T ∼ 308-770 MeV, for both ūd and ūs sectors.Finally, at T = 1540 MeV, M 0 and M 2 (for any flavor combination) merge together to form a single multiplet, and the approximate SU (2) CS symmetry has become completely washed out, and only the chiral symmetry remains. Next, from the ss panels in Figs.1-7, we see that its window for the approximate SU (2) CS symmetry is almost the same as that of ūd and ūs, i.e., T ∼ 308-770 MeV. Finally, we visually estimate the windows of the approximate SU (2) CS symmetry for heavy mesons with the c quark, which seem to be simlar to that of the light mesons.However, if one performs a more precise estimate with the criterion (23), one can reveal some salient features of the heavy vector mesons which cannot be easily observed by visual estimate, as shown in the next section. V. 
Symmetry breaking parameters of (ūd, ūs, ss, ūc, sc, cc) In this section, we use the criteria ( 6), (10) At each T , and for fixed zT , the chiral symmetry breakings due to the quark masses of the meson operator can be seen clearly from κ V A , κ P S , and κ T X , in the order of for each channel of α = (V A, P S, T X).Also, for each flavor content, κ α (zT ) at fixed zT is a monotonic decreasing function of T .Note that for the charmonium cc, the chiral symmetry breakings at T = 1540 are still not negligible, e.g., at zT = 4, 0.02 About the SU (2) CS symmetry breaking parameter κ CS = max(κ AT , κ T X ), for any flavor combination, it is a monotonic decreasing function of T at fixed zT , since both κ AT (zT ) and κ T X (zT ) are monotonic decreasing function of T .However, the flavor dependence of κ CS turns out to be rather nontrivial, and it is temperature dependent.Similarly, the flavor depenedence of the SU (2) CS symmetry fading parameter κ is also temperature dependent. Nevertheless, it is interesting to point out that κ CS of the ūc sector is the smallest among all flavor sectors, while κ is almost the same for all flavor sectors, for all seven temperatures in the range of 190-1540 MeV.This suggests that the most attractive vector meson channels to detect the emergence of approximate SU (2) CS symmetry are in the ūc sector.This will be addressed more quantitatively in the subsection V B, in terms of the window of T for the approximate SU (2) CS symmetry. A. Hierarchical restoration of chiral symmetries Now we proceed to investigate the restoration of chiral symmetries in N f = 2 + 1 + 1 lattice QCD at the physical point.We use the criteria ( 6) and ( 10) to obtain T c and T 1 for each flavor combination.To this end, we collect the data of κ V A (zT ) and κ T X (zT ) at the same zT = (0.5, 1, 2), and plot them as a function of T , as shown in Figs. 15 and 16.According to (24), it follows that for any ϵ V A in ( 6) and any ϵ T X in (10), the flavor dependence of T c which immedidately gives Equations ( 25)-( 27) are the first results of lattice QCD.They immediately give the hierarachic restoration of chiral symmetries in N f = 2 + 1 + 1 QCD, i.e., from the restoration of SU (2) L × SU (2) R × U (1) A chiral symmetry of (u, d) quarks at T ūd c1 to the the restoration of SU (3) L × SU (3) R × U (1) A chiral symmetry of (u, d, s) quarks at T ss c1 > T ūd c1 , then to the restoration of SU (4) L × SU (4) R × U (1) A chiral symmetry of (u, d, s, c) quarks at T cc c1 > T ss c1 . In the following, we demonstrate the hierarchical restoration of chiral symmetries explicitly, for ϵ V A = (0.05, 0.01) and ϵ T X = (0.05, 0.01) respectively.In Tables III and IV, for both ūd and ūs sectors, both T c and T 1 are less than 190 MeV, for any combinations of ϵ V A = (0.05, 0.01), ϵ T X = (0.05, 0.01), and zT = (0.5, 1, 2).For these cases, A is restored at a temperature lower than 190 MeV, for both ūd and ūs sectors, However, for the ūc sector, only for A is restored at a temperature lower than 190 MeV. 
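As a minimal sketch of how a restoration temperature can be read off from such data, the following interpolates κ(T) at fixed zT against a threshold ε, in the spirit of the criteria (6) and (10); the κ values below are invented for illustration and are not the measured ones.

```python
import numpy as np

def restoration_temperature(temps_mev, kappa_vals, eps):
    """Lowest temperature at which kappa(zT) drops below eps, obtained by
    linear interpolation between the two bracketing data points.
    Assumes temps_mev is sorted and kappa_vals decreases monotonically with T."""
    temps = np.asarray(temps_mev, dtype=float)
    kap = np.asarray(kappa_vals, dtype=float)
    below = np.flatnonzero(kap <= eps)
    if below.size == 0:
        return None                      # symmetry not restored within the scanned range
    i = below[0]
    if i == 0:
        return temps[0]                  # already satisfied at the lowest temperature
    t0, t1, k0, k1 = temps[i - 1], temps[i], kap[i - 1], kap[i]
    return t0 + (eps - k0) * (t1 - t0) / (k1 - k0)

# Illustrative kappa_VA values at zT = 1 for a heavy flavor sector (made up):
temps  = [190, 257, 308, 385, 513, 770, 1540]
kappas = [0.60, 0.35, 0.20, 0.09, 0.04, 0.010, 0.002]
print(restoration_temperature(temps, kappas, eps=0.05))   # falls between 385 and 513 MeV
```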
[Displaced table fragment (values in MeV): ... 235(5) 320(5) 255(10) 350(10); T^{sc}_1: 335(5) 730(5) 375(5) 800(5) 400(5) 790(5); T^{cc}_1: 835(5) 1610(10) 875(5) 1395(5) 865(5) 1420(5).]

Now we investigate the hierarchical restoration of chiral symmetries with ϵ_VA = ϵ_TX = 0.05 and zT = 1, using the values of T_c and T_1 listed in Tables III and IV. Next, we study how T_c (T_1) depends on ϵ_VA (ϵ_TX). Since κ^{q1q2}_VA (κ^{q1q2}_TX) at fixed zT is a monotonic decreasing function of T, it follows that T_c (T_1) is monotonically increased as ϵ_VA (ϵ_TX) is decreased (i.e., as the precision of the chiral symmetry becomes higher). For example, if we set ϵ_VA = ϵ_TX = 0.01, then at zT = 1, the resulting T_c and T_1 are shifted to higher values (Tables III and IV).

B. Temperature windows for the approximate SU(2)_CS symmetry

In Figs. 17 and 18, the SU(2)_CS symmetry breaking and fading parameters are plotted as functions of T at fixed zT, for the light mesons (ūd, ūs, ss) and the heavy mesons (ūc, sc, cc), respectively. In general, for any flavor content, at fixed zT, κ_CS is a monotonic decreasing function of T, while κ is a monotonic increasing function of T. Thus, for any ϵ_cs and ϵ^f_cs, the window of T satisfying the criterion (23) can be determined. Note that, if ϵ_cs or ϵ^f_cs becomes too small, the window of T would shrink to zero (null). Using linear interpolation and extrapolation of the data points in Figs. 17 and 18, we obtain the resulting T windows in Tables V-VI at zT = (1, 2), respectively, each for six flavor combinations, and for all combinations of ϵ_cs and ϵ^f_cs sampled from (0.1, 0.15, 0.20, 0.25, 0.30). For visual comparison, we plot the windows of T in Fig. 19, for a range of values of (ϵ_cs, ϵ^f_cs) from large to small ones. Tables V-VI and Fig. 19 are the first results of this kind in lattice QCD.

It is interesting to see that the T windows of the approximate SU(2)_CS symmetry are dominated by the channels of heavy vector mesons of (ūc, sc, cc). As the precision of the SU(2)_CS symmetry gets higher with smaller ϵ_cs or ϵ^f_cs, the T windows of the light vector mesons (ūd, ūs, ss) shrink to zero, and only those of the heavy vector mesons survive. This suggests that the most attractive vector meson channels to detect the emergence of the approximate SU(2)_CS symmetry are in the (ūc, sc, cc) sectors, which may have phenomenological implications for the observation of the approximate SU(2)_CS symmetry in relativistic heavy-ion collision experiments such as those at the LHC and RHIC. Moreover, the results of Tables V-VI and Fig. 19 also suggest that hadron-like objects, in particular in the channels of vector mesons with a c quark, are likely to be predominantly bound by the chromoelectric interactions into color singlets at the temperatures inside their T windows of the approximate SU(2)_CS symmetry, since the noninteracting theory with free quarks does not possess the SU(2)_CS symmetry at all.

VI. Concluding remarks

In this work, we have studied the meson z-correlators in N_f = 2+1+1 lattice QCD at the physical point, for seven temperatures in the range of 190-1540 MeV, as summarized in Table I. Our plan is to complete 21 gauge ensembles with three lattice spacings a ∼ (0.064, 0.069, 0.075) fm, which can be used to extract the continuum limit of the observables, for temperatures in the range of 160-1540 MeV.
Using seven gauge ensembles with a ∼ 0.064 fm, we computed the meson z-correlators for the complete set of Dirac bilinears (scalar, pseudoscalar, vector, axial vector, tensor vector, and axial-tensor vector), and each for six combinations of quark flavors (ūd, ūs, ss, ūc, sc, These are the first results in lattice QCD.They immediately give the the hierarchical restoration of chiral symmetries in N f = 2 + 1 + 1 QCD, i.e., from the restoration of SU (2) L × SU (2) R × U (1) A chiral symmetry of (u, d) quarks at T ūd c1 to the restoration of or vice versa.In reality, for physical (u, s, c) quarks, we observe that Yet, in general, it is unclear to what extent (28) depends on the ratios of quark masses. One of the phenomenological implications of the hierarchical restoration of chiral symmetries is the pattern of hadron dissolution at high temperatures, which leads to the hierarchical dissolution of hadrons, and the hierarchical suppression of hadrons in the quark-gluon plasma. Theoretically, the meson with quark content qQ dissolves completely as q and Q become deconfined, i.e., when the screening mass of qQ is larger than its counterpart in the noninteracting theory with free quarks of the same masses.Presumably, m qQ scr ≥ m qQ(free) scr happens at the temperature T qQ d ≳ T qQ c1 , after the SU (2) L × SU (2) R × U (1) A chiral symmetry of qΓQ has been effectively restored.Thus, for N f = 2 + 1 + 1 lattice QCD at the physical point, one expects that the hierarchy of dissolution of mesons is exactly the same as that of the restoration of chiral symmetries (27), i.e., This leads to the hierarchical suppression of mesons in quark-gluon plasma, which could be observed in the relativistic heavy ion collision experiments such as those at LHC and RHIC.Here we recall the seminal paper by Matusi and Satz [18], in which it was proposed that the dissolution of J/ψ in the quark-gluon plasma would result in the suppression of their production in heavy ion collision experiments.To investigate whether (29) holds in N f = 2 + 1 + 1 lattice QCD at the physical point is beyond the scope of this paper. Besides the meson z-correlators, the restoration of chiral symmetry in high temperature QCD can also be observed in the baryon z-correlators [2].For QCD with N f = 2(3) massless quarks, the chiral multiplets of baryon operators have been obtained by the group theoretical methods, see e.g., Ref. [19] and the references therein.Now, for QCD with physical (u, d, s, c, b) quarks, with quark masses ranging from a few MeV to a few GeV, we expect that the hierarchical restoration of chiral symmetries can be observed from the degeneracies of z-correlators of baryon chiral multiplets.It would be interesting to see whether the hierachy of chiral symmetry restoration from the baryon z-correlators is compatible with that from the meson z-correlators. 
The symmetry breaking parameters κ_AT and κ are defined in Sec. III. In Figs. 8-14, the symmetry breaking parameters of the six flavor combinations are plotted as a function of the dimensionless variable zT, for seven temperatures in the range of 190-1540 MeV.

TABLE I. The lattice parameters and statistics of the seven gauge ensembles for computing the meson correlators. The last 3 columns are the residual masses of the u/d, s, and c quarks.

TABLE II. The classification of meson interpolators q̄_1 Γ q_2, and their names and notations.

TABLE III. The temperature T^{q̄_1 q_2}

TABLE IV. The temperature T^{q̄_1 q_2}

TABLE V. The approximate ranges of T satisfying the criterion (23) at zT = 1 for six flavor contents. The table lists all nonzero windows of T for all possible combinations of ϵ_cs and ϵ^f_cs sampled from (0.1, 0.15, 0.20, 0.25, 0.30). Each T window is in units of MeV, with uncertainties ±5 MeV on both ends of the window.

TABLE VI. The approximate ranges of T satisfying the criterion (23) at zT = 2 for six flavor contents. The table lists all nonzero windows of T for all possible combinations of ϵ_cs and ϵ^f_cs
Sustainable BIM-Based Construction Engineering Education Curriculum for Practice-Oriented Training : The latest IT technology integration movements, such as building information modeling (BIM), have engendered changes in the technology and participatory organizations in the construction industry, which have resulted in process innovations and productivity gains. BIM lays the foundation for using a variety of new information that is not applicable to traditional construction methods. Construction companies are applying such information to various analyses, simulations, and learning and education projects to stimulate innovation. In Korea, however, since BIM was introduced in 2008, it has been used in various ways across diverse fields, but its contribution remains minimal. This is due to the inadequate competence level of BIM managers, who emerge from a system incapable of adequately educating BIM managers. In other words, the curriculum has not been able to impart the BIM skills necessary to accommodate the requirements of the industry. Only the most basic BIM modeling course is o ff ered, and even such a course is dependent on external instructors. This creates a gap with the existing construction engineering educational curriculum. This study proposes a BIM-based construction engineering educational curriculum that has not been attempted before to overcome these limitations and generate a BIM workforce to cater to the industry. construction-related departments, it is not surprising that the students who take these courses are not able to grasp the BIM-related needs of the industry and implement BIM for real problems. Table 1 shows the BIM course examples of major institute of South Korea. Due to the limitations of having to teach many things within a short period of education, most curricula have a structure in which they cannot give deep and balanced lectures on BIM theory and practice. In order to properly address these problems, a completely novel educational curriculum should be designed and developed. This curriculum should integrate IT modelling software such as BIM with conventional construction-related subjects, and the students should also be selected in accordance with the new change. It would be ideal if companies could participate in the new curriculum so that the students could be transformed into a project-based, learning-type (PBL) workforce who would be able to perform the practical jobs required in the industry upon graduation [7,10,39]. This study aims to develop a BIM-based construction engineering curriculum for practice-oriented training, in which the various academic and industrial needs are accommodated, enabling students to become leaders of the new age of the construction industry. For the development of the curriculum, the systematic course development methodology is applied to present educational objectives, composition of educational topics, composition of educational contents, and evaluation criteria. Furthermore, the developed construction IT curriculum is instituted in a Korean university and the detailed analysis results are presented in this paper. Methodology Unlike previous research studies that have added subjects related to BIM to an existing construction engineering educational program, this study aims to develop the entire curriculum of a department from scratch, including goals and objectives of the education, learning topics, composition of learning subjects, and transformation of evaluation methods. 
Two major curriculum models can be used to develop a curriculum that meets these objectives: (1) the product model and (2) process model [40]. The product model is a model that emphasizes plans and intentions, and the process model is a model that focuses on the effects of activities and education [41]. The product model and process model have opposite characteristics in terms of educational purpose, intent, and tools, as follows: 1. Product model: behavioral objectives model, interested in products of curriculum (e.g., Tyler and Bloom model) 2. Process model: focus on teacher activities and roles, student activities, emphasis on means rather than ends (e.g., Stenhouse) Unlike the process model, which relies on an experiential approach, the Tyler model is a representative example of the product model that is widely used in the development of science and technology curricula [40]. Since the objective of proposing a curriculum in this study is to create a new construction engineering educational program incorporating IT (Information Technology), the Tyler model is applied in this study. The study follows the 3 steps of the Tyler model: (1) curriculum preparation, (2) curriculum development, and (3) curriculum improvement [42,43]. In addition, it is verified that the developed curriculum is pedagogically and technically appropriate by implementing the curriculum in a newly established university in Korea. Figure 1 illustrates the methodology showing the objectives and procedures of this study. Preparation Stage of BIM Curriculum Development Instead of adding a few BIM-related subjects to the existing curriculum of construction engineering departments in universities, a construction-IT convergence course for training BIM and Preparation Stage of BIM Curriculum Development Instead of adding a few BIM-related subjects to the existing curriculum of construction engineering departments in universities, a construction-IT convergence course for training BIM and construction IT talent is proposed. However, it must have a completely different curriculum for each semester. The new curriculum enables students to acquire a comprehensive understanding of construction engineering knowledge and theories, as well as a variety of BIM and construction IT methodologies. Accordingly, the curriculum should be developed through a systematic procedure to impart training of BIM principles, knowledge, and skills necessary for the construction industry. In the preparation stage, which is the first stage of curriculum development, the following aspects should be considered: Goals, advantages, and obstacles in BIM implementation for the architecture, engineering, and construction (AEC) industry 2. Purpose of BIM education for the construction industry 3. Status, limitation, and problems of BIM education in existing construction programs 4. Difference of importance of BIM skills and knowledge between original construction tasks and BIM-based new tasks, as well as importance of new BIM-based construction engineering curriculum 5. The relationship between BIM and construction engineering tasks in companies 6. Areas of BIM implementation in the construction industry and in companies, as well as BIM knowledge and skills required for new employees 7. Other fields that converge with BIM, such as smart construction technologies 8. Importance of knowledge and theories of BIM-based construction engineering and management 9. 
Priority of BIM education in companies According to the current status of BIM education in Korea, public and private institutions as well as universities are biased towards imparting theory-based BIM education. Especially in the case of universities, the curriculum only contains one or two BIM classes [7]. This is due to a variety of reasons, namely the closed culture of existing university education, the limitations of BIM knowledge and application among construction engineering professors, and the difficulty of introducing external instructors who have combined capabilities of theory and knowledge. In particular, when a typical full-time faculty conducts BIM classes, lectures only cover related theories or general content, resulting in inadequate information necessary for practical application on site. On the contrary, if an external lecturer presides over the lecture, they will primarily focus on BIM modeling or data management. Hence, the lectures will give more focus only to practical BIM skills and software. This shows that both cases are extremely unbalanced [10]. According to the results of a survey conducted in major construction companies in Korea, the companies have higher levels of expectations from BIM engineers from universities [7,10]. Seven of the top eight companies have a BIM team. BIM applications, such as design (drawing) quality management, 3D visualization-based communication, scheduling and sequence planning, constructability reviews, and interference management, are carried out by all eight companies, and six companies or more carry out site logistics and construction system design. As for the knowledge and skills expected of university graduates, the companies require a basic conceptual understanding of the importance of BIM implementation in the construction engineering, areas of BIM implementation in the construction process, and BIM-based quantity take-off and cost estimation. As for software requirements, it is imperative that the graduate should be familiar with a BIM checker and simulation tool, such as Navisworks or Revit. The construction companies listed spatial trade coordination, communication, design quality management, constructability review, 4D simulation, and shop drawing Sustainability 2019, 11, 6120 6 of 16 as high priority applications of BIM and insisted that these aspects should be included in university education [10]. In summary, the current status of BIM shows that the existing BIM curriculum in university education does not meet the basic level of requirements. On a positive note, it is possible to develop a practical curriculum for cultivating human resources that meet the needs of the industry and academia. This curriculum should include: (1) general concepts and knowledge of BIM technology; (2) areas of implementation in the various construction processes (including visualization, communication, clash detection, and constructability review); (3) BIM-based construction engineering and management skills; (4) BIM project execution planning and BIM standards; (5) software compatibility; (6) expandability to other construction IT technologies. 
Development Stage of BIM Curriculum Development: Overview The curriculum development consists of five steps: (1) develop a basic framework that describes the curriculum's name, target, introduction, and characteristics; (2) set curriculum goals, objectives, and educational methods through various literature reviews on BIM implementation; (3) set learning topics and course schedules according to the goals and objectives using a systematic approach; (4) develop individual lectures and curricula for each learning topic and set lecture methods for each class; and (5) implement the developed curriculum and analysis of the implementation results. The curriculum framework involves setting the goals and objectives of the curriculum. In addition, the purpose of this study is to develop the entire curriculum by synchronizing optimal teaching methods with the duration for each learning topic. The name of the BIM curriculum is "BIM converged construction engineering curriculum", and the target of this curriculum are the students in the department of professional BIM training. These students are guaranteed jobs upon admission to the department through a contract established with a construction company. One of the characteristics of the curriculum is that there is an agreement with a construction company for hiring upon graduation. Additionally, it is an industry-academia collaborative curriculum jointly operated by the university and the company. Unlike the conventional construction engineering departments in South Korea, this is the first curriculum in which BIM and conventional construction engineering subjects are integrated to cultivate a BIM workforce. Both the knowledge and the skills necessary for BIM are taught in depth from the first year until graduation, and the PBL-type classes for various problems in connection with a company will be held to train the workforce. Furthermore, to develop a curriculum that can produce BIM professionals tailored to the construction industry, this should reflect the needs of the company in terms of individual lectures coupled with a variety of methods, such as PBL and industry-linked methods, and practice lectures, so that the new curriculum can be clearly differentiated from the existing BIM curricula. Based on these characteristics, the basic framework of the curriculum is presented, as shown in Figure 2. Students participating in the curriculum achieve a balanced understanding of basic BIM and construction IT knowledge and skills, as well as construction engineering and construction management theory and knowledge. In addition, emphasis has been put on the joint activities proposed by the companies to enhance the labor skillset in the construction industry. Through the balanced distribution of these 3 types of classes, theoretical knowledge and practical task capabilities can both be developed and enhanced. In order to facilitate early hiring of students, the structure of the curriculum is designed so that full-time education can be conducted for the 1st year students, and hiring and part-time education can be conducted for the 2nd and 3rd year students. management theory and knowledge. In addition, emphasis has been put on the joint activities proposed by the companies to enhance the labor skillset in the construction industry. Through the balanced distribution of these 3 types of classes, theoretical knowledge and practical task capabilities can both be developed and enhanced. 
In order to facilitate early hiring of students, the structure of the curriculum is designed so that full-time education can be conducted for the 1st year students, and hiring and part-time education can be conducted for the 2nd and 3rd year students. Development Stage of BIM Curriculum Development: Setting the Learning Goals Learning goals should make students industry-ready and employable. Setting learning goals is an important step in the systematic curriculum development process because goals provide milestones for the students, as well as invariably represent the knowledge, skills, and behavioral standards set by the curriculum [44]. The first step in setting learning goals was to study and review existing BIM-related courses, which the author analyzed in a previous study [7]. When setting the Development Stage of BIM Curriculum Development: Setting the Learning Goals Learning goals should make students industry-ready and employable. Setting learning goals is an important step in the systematic curriculum development process because goals provide milestones for the students, as well as invariably represent the knowledge, skills, and behavioral standards set by the curriculum [44]. The first step in setting learning goals was to study and review existing BIM-related courses, which the author analyzed in a previous study [7]. When setting the learning goals for the "BIM converged construction engineering curriculum," the knowledge the students will acquire should be scalable for implementation in an actual project. Therefore, the curriculum considers the industry status, significance, and expectations of a new BIM course. Based on pedagogical factors of Bloom's taxonomy, such as cognitive (mental skills, knowledge-based), affective (growth in areas of feeling or attitude, emotion-based), and psychomotor (manual or physical skills, action-based), the curriculum was categorized according to learning goals and objectives [45]. We developed the curriculum's learning goals and objectives through a thorough literature review and by conducting analysis on the purpose of conventional BIM courses (Table 2). Table 2. Learning goals and objectives for BIM-based construction engineering curriculum. Learning Goals No. Learning Objectives Type Understand the BIM knowledge (concept and theories) Development Stage of BIM Curriculum Development: Choosing Learning Topics Learning topics should be in line with individual goals defined in the earlier stages and should be based on the learner's activities. In addition, the learning topics should be established based on the knowledge and skills that will be acquired through the curriculum, taking into account the values and behavioral codes set by the construction industry [44]. The curriculum of the newly established BIM specialist training department consists of five learning topics and 19 subtopics, as shown in Table 3. In addition, these learning topics and subtopics are set according to the learning goals and objectives highlighted earlier ( Table 2). The main point in this process is to consider all the learning topics and reorder them so that the students can learn them sequentially and effortlessly for a period of three years. Thus, by incorporating all these considerations, we set up a learning schedule. Development Stage of BIM Curriculum Development: Organizing Learning Topics Based on the learning topics and schedule, the subjects of the BIM curriculum were organized. This study considered the issues in the subjects under this framework. 
• The curriculum should be aimed at cultivating the capabilities necessary for employability in a construction company, which can be quantitatively evaluated by the company. Upon completion of the curriculum, the students should be equipped to such a level that they can immediately work on actual tasks within a company. • The newly-established department has a three-year course. After one year of full-time study, two years of contract employment and part-time study are mandatory so that the subjects reflect the student's capabilities. • It should reflect the current trends in Korea's construction industry and the BIM response strategies of construction companies. • Learning topics and contents should be directly related to goals and objectives, and subjects should be organized to reflect theories and practices. • Subjects based on learning topics should be organized according to the three-year course procedure, making it easy for students to understand and develop their competence levels. • Construct curriculum contents that reflect all the learning goals, objectives, and topics, which are established during the development stage (Sections 3.3-3.5). Since the final purpose of this research was to apply the proposed curriculum directly to the newly opened departments, several brainstorm meetings and discussions were held to organize the curriculum contents with school officials and industry experts. The curriculum table was completed in consideration of the subjects students must take in each semester, difficulty level of each grade, and appropriateness of the curriculum arrangement. The curriculum of major institutes such as Associated General Contractors of America (AGC) and American Society of Civil Engineers (ASCE) were also studied. Subject syllabuses of universities in Korea, education contents of private organizations, papers on BIM technology research and education, and cases of BIM implementation in construction companies were considered. By selecting subjects appropriate for the three-year course, the students participating in the curriculum could acquire in-depth knowledge of the BIM technology and gain substantial construction engineering and construction management understanding. In addition, to maximize achievement of the learning objectives for each subject, the subject operation method (in other words, the lecture method) was selected and presented [44]. Table 4 shows the semester-wise subject schedule created by the systematic course development procedure. The three years of the curriculum comprise a total of 36 subjects. Essentially, each year's education objectives are set differently. The first stage of the course focuses on acquiring basic construction engineering theories, BIM fundamentals, and skills for employment after completion. Since second-and third-year students are working in the field and participate in the curriculum in the form of part-time education, the curriculum for them is mainly administered on weekends and via on-line teaching. In addition, advanced theories and skills for BIM and construction management are studied in the form of project-based learning and joint company learning in order to deepen basic knowledge and skills learned through the first grade course and the practical capabilities acquired by working with the company. -Quality and safety management -BIM API practice -BIM enhancement for free-form structure 2 -3D scanning and 3D printing In the first year curriculum, all learning is lecture-oriented. 
BIM software classes are held in either a practice-type or company-linked-type environment, with company experts presiding over lectures. In the second year subjects, all classes, except "quality and safety management", are conducted using both lectures and practice-type classes, and the subjects named "BIM enhancement for free-form structure" and "BIM prototyping project" are company-affiliated. The BIM implementation technology for the free-form structure is in the form of practice based on the latest technology. For the prototype project, students directly work on a project presented by the company's experts, which enhances their practical capabilities. The third year curriculum is largely composed of three types: in-depth courses on BIM software and ICT, BIM-based construction engineering and management strategies, and PBL classes, such as research and development jointly carried out by a company. The curriculum aims to help students become BIM professionals by achieving the goals and objectives set by the curriculum through the theory-oriented stages in the first year and the practical-oriented stages in the second year. To verify the achievement of the objectives, corporate experts participate in the assessment. Improvement Stage of BIM Curriculum Development The BIM-based construction engineering curriculum proposed in this study was developed by a systematic course development procedure. For verification, it is necessary to apply the curriculum to a newly opened department of a university [39,46]. In addition, since construction companies hire students after completion of the curriculum and engage them in practical tasks, it is necessary to have a validation process and feedback system from the officials who have expertise in BIM. For the curriculum verification process, we distributed preliminary interview sheets to a total of 10 people, consisting of 3 education experts, 2 experts from private educational institutions, and experts in construction engineering, construction management, and BIM. We collected their answers and then conducted a telephonic interview. The experience levels of the interviewees are summarized in Table 5. The evaluation forms were e-mailed to interviewees and the answers were collected. In addition, telephonic interviews were conducted for all 10 interviewees to obtain more comprehensive and detailed comments on the evaluation information received in writing. The main points for evaluation requested in the interview are as follows. • Assessment of the overall framework of the proposed BIM-based construction engineering curriculum; • Learning goals and objectives; • Assessment of the relevance of learning topics and their association with learning goals and objectives; • Assessment of the subject, procedure, and lecture method of the curriculum organization table. The comments received from the group of 10 experts are summarized in Table 6. Based on the comments and suggestions of these experts, the above four parts were evaluated and revised. In particular, the experts consisted of academic experts from universities and educational institutions, and industry experts from construction companies and BIM companies. Their comments suggested varied orientations depending on their respective affiliations. University professors suggested that along with practical education, construction engineering knowledge and theory should be included also. 
Instructors from BIM educational institutions other than universities emphasized the participation of industry experts in the curriculum and adaptation to the practical work environment through the use of different software. Among industry experts, BIM managers working for construction companies rated the students' capabilities after completion of the curriculum as a critical issue, and for this purpose, the association with companies was an important feature of the curriculum. In contrast, the CEOs of BIM companies responded that the curriculum should not only enhance BIM competence across broad areas but also enable students to set up their own companies in the future. Through the improvement stage, the BIM-based curriculum was revised; the supplementary points raised in the interviews were addressed, and the revised curriculum was actually implemented in the newly opened "department of BIM converged construction engineering" at the university.
Expert e - Since BIM professionals will be active in diversified ordering and contract environments, such as lean construction or pre-construction, training in these aspects is also required. - Adaptation training for actual working environments is needed for the students to understand how a project is executed and to be immediately put into real projects.
Expert f - A separate framework is needed for the development of students' competences through grade-based assessment of students each semester. - In particular, the companies that would hire them should be able to participate in the curriculum, such as in the education, management, and assessment.
Expert g - As construction industry trends are related not only to BIM but also to ordering and contract methods and other smart construction and construction IT technologies, it is recommended that education in the latest technologies, such as IT, robotics, and deep learning, be combined with the construction industry information. - To complement the relatively weak construction engineering knowledge and skills, operation of the curriculum in connection with construction engineering is needed.
Expert h - Since the first-year course is full-time education, it will be more efficient to establish a separate plan for linking companies with students to give them a sense of BIM skills. - In particular, corporate mentoring will be helpful in cultivating working-level talent from an early stage.
Expert i - Although this curriculum takes the form of an agreement with a company, in the future, the curriculum should empower students to develop new types of services and even launch their own startups. - Unlike in large corporations, in BIM companies the employees often oversee the entire BIM project. Hence, strategic training for construction project management and BIM operation is required.
Expert j - Some subjects, such as BIM API and BIM-ICT, are thought to be related to the essential skills and qualities required for BIM professionals, and subjects similar to these should be added.
Implementation Stage of BIM Curriculum Development After the improvement stage of the "BIM-based construction engineering education curriculum", which was developed using the systematic course development approach, the course was implemented in the educational curriculum of the "department of smart convergence engineering", which was recently established at the Hanyang University ERICA campus. There are four majors in this department, one of which is construction IT convergence.
The proposed BIM curriculum was applied to the educational curriculum of the department of construction IT convergence. This is a complete curriculum, reflecting most of the set learning goals and objectives, learning topics, and subjects. Through this, the department became the first in Korea to establish a separate department for the development of BIM professionals. It has established itself as a novel department in which major companies in the construction industry in Korea, including construction companies, BIM companies, construction designers, CM companies, and software vendors, participate in the overall process of the curriculum. Summaries and Conclusions The purpose of this study was to develop a BIM-based construction engineering curriculum. To achieve this, the learning goals and objectives of the education were established in order to cultivate accomplished BIM professionals that can immediately perform practical tasks from the point of hiring. The curriculum was developed using the system course development approach. Through the application of systematic course development, the learning goals and objectives of BIM courses were established and the learning topics were established based on these. In particular, a curriculum table was developed by arranging learning topics satisfying the requirements of departments, namely a three-year curriculum and employment with companies in the second year. To review and supplement the developed curriculum, we conducted interviews with construction engineering and BIM experts in industry and academia, and the revised learning goals and objectives, learning topics, and curriculum were implemented in a newly opened department at Hanyang University, ERICA campus. Using the logical development process, the maturity of the curriculum was verified, and upon actual implementation, the value of the curriculum was validated. Above all, the advantages of this educational curriculum is that it reveals the problems in the existing curricula from a practical standpoint, such as theory-based lectures in traditional Korean universities or extreme practice-oriented learning in public and private educational institutions. The BIM demands of the construction industry could not be satisfied by these approaches. Simply inserting BIM into the existing construction engineering curriculum does not enable the development of the level of BIM personnel required by the construction industry. To overcome this, the systematic course development application was applied and the curriculum was improved through discussion and collaboration with various industry experts. In particular, it is a great advantage that industry experts can incorporate their needs into the curriculum by participating in various activities, such as curriculum operation, student education, evaluation, and collaboration. Korea's BIM education remains at a very primitive level from the perspective of industry demands and technical requirements in comparison with foreign countries with advanced BIM implementation levels. The BIM-based construction engineering curriculum, newly developed for the enhancement of capabilities which can handle theory, practical tasks, and creativity for the resolution of actual problems, provides a logical approach that can be utilized for future BIM curriculum development, not only in Korea but also abroad. It can also be used to develop curricula in other areas based on pedagogical principles. 
This study presents the results of implementation of a new type of BIM curriculum. Through future research, we would like to present strategies of operation, student selection, evaluation, and business linkage for the implemented curriculum. In particular, we aim to develop the curriculum further through quantitative evaluation of the quality of the curriculum proposed in this study and the value gained in terms of meeting company demands.
2019-11-07T15:02:08.158Z
2019-11-03T00:00:00.000
{ "year": 2019, "sha1": "88bc397656f2e89cb974c6d936232fd9b415ed1d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/11/21/6120/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "cb664ae4182aeb558112aa274af0bffc63b91d42", "s2fieldsofstudy": [ "Engineering", "Education" ], "extfieldsofstudy": [ "Economics" ] }
55230345
pes2o/s2orc
v3-fos-license
INFORMATION STATE OF SYSTEM ESTIMATION Because the behavior of the object under study is random, it is difficult or even impossible to create an adequate program for its regular maintenance. An adaptive servicing algorithm better meets the needs of current monitoring and control of the object. Nowadays, both different adaptive algorithms and the capabilities of sample group coding are effectively used for this purpose. Reducing the demand on channel capacity by lowering the required binary digit rate has always been a relevant task. Recently, the design of new systems has frequently been based on the entropy measure and permutation encoding. Such applications are used in biometric systems, cryptography, body wireless networks, and others. A concrete combination of active source addresses (those with significant samples) may be considered a generalized image of the object under study. It can also be used as information for the control function of the next level of a cyber-physical system. The paper considers how to use transmission channel capacity most efficiently and how to obtain a generalized image of a serviced object's state as the entropy estimation of its sensors' activities. The estimating operation procedures are also described, and the above-mentioned coding procedure is presented. Copyright © Research Institute for Intelligent Computer Systems, 2015. All rights reserved. INTRODUCTION The state of the object being observed is estimated using a set of different transducers, the so-called measurement data sources. The primary data are processed by further means of unification and the subsequent ADC. An optimal servicing program should meet the needs of the object tracking system as fully as possible. Since the behavior of the object under study is random, it is difficult or even impossible to create an adequate program of regular maintenance. An adaptive servicing algorithm provides better results for the needs of current monitoring and control of the object [1,2]. The conditional mean of the system state vector may be found by passing the conditional mean of the measurement history. Stationary and non-stationary data, noiseless-channel versions of PCM, predictive quantization, and predictive-comparison data compression systems were considered, and the ensemble-average performance of the nonlinear filters was derived [3]. In particular, this can be realized using a polynomial predictor or an adaptive commutator. The latter is capable of separating from the i-th primary measuring signal the samples that go beyond the defined tolerance field, whose value is given by the permissible quantization error. Unlike regular surveys, the samples essential for the consumer appear at chaotic time moments, and the order in which they appear from the various sources is unknown. There is a need for marking the onset of the samples and identifying their affiliation (i.e., addressing) to a particular so-called active source. The concrete combination of these addresses of significant samples of active sources may be regarded as a generalized image of the object under study. It can be used as information for the control function of the next level of the cyber-physical system as well [4]. If an individual address is used for source identification, then it is encoded by a primitive binary code. In this case, it would require ⌈log2 n⌉ binary digits for each sample (here n is the number of sources
being measured). A paradoxical situation is observed: on the one hand, adaptive maintenance reduces the information redundancy of the system measurement data by exploiting the real aggregate activity of the sources; on the other hand, the servicing overhead of the individual addresses makes the situation worse. Addressing a group of non-redundant samples turns out to be more promising. To this end, permutation group coding may be used [5], because its number of symbols per sample is close to the entropy of the totality of active sources, whereas primitive coding of individual source addresses corresponds to equal activity of every source, i.e., to the maximum entropy of the source totality (equiprobable sources). Both permutation coding and the entropy measure are often effectively used for wireless networks [6][7][8][9], dynamic [10,11] and other systems [12][13][14], especially for biometric purposes. For example, a body wireless network uses an intrinsic characteristic of the human body as the authentication identity or as a means of securing the distribution of a cipher key to protect inter-body area sensor network communications [6]. The relationship between a novel personal entropy measure for online signatures and the performance of several state-of-the-art classifiers was studied, and a clear relationship was shown between such an entropy measure of a person's signature and the behavior of the classifier [15]. Currently, almost all systems involve an identity authentication process before a user can access the requested services, such as online transactions, entrance to a secured vault, logging into a computer system, accessing laptops, secure access to buildings, etc. Therefore, authentication has become the core of any secure system, and in most cases it relies on identity recognition approaches. Biometric systems provide the solution to ensure that only a legitimate user, and no one else, accesses the rendered services. The information content of the haptic data generated directly from an instrument interface was analysed [12]. The entropy approach was also successfully applied to cryptographic needs [16]. For some well-known chaotic dynamical systems, it has been shown that the complexity of their behavior is particularly well described by an entropy measure in the presence of dynamical or observational noise [7]. It was shown that the metric and permutation entropy rates - measures of new disorder per new observed value - are equal for ergodic finite-alphabet information sources (i.e., discrete-time stationary stochastic processes). Finally, the equality of permutation and metric entropy rates is extended to ergodic non-discrete information sources when entropy is replaced by differential entropy in the usual way [8]. Amplitude quantization and permutation encoding are two approaches to efficient digitization of analog data [10]. The recently proposed, conceptually simple and easily calculated measure of permutation entropy can be effectively used to detect qualitative and quantitative dynamical changes [17]. We propose to use the entropy measure both for examining the information state of the object and for creating a proper servicing algorithm. The non-redundant samples form a permutation-type sequence [18] p = x(1), x(2), ..., x(N), where x(j) is the x-type symbol at the j-th position of the permutation. It is necessary to consider two situations: the statistics of the totality of system sources' activities are a priori known or a priori unknown [19,20].
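To make the comparison between individual addressing and entropy-based group coding concrete, the short sketch below uses a purely illustrative activity distribution (the numbers are assumptions for illustration, not data from the paper): primitive addressing always spends ⌈log2 n⌉ bits per non-redundant sample, while a group (permutation) code can approach the entropy of the actual activity distribution, and the two coincide only for equiprobable sources.

```python
from math import ceil, log2

# Illustrative relative activities of n = 8 sources (assumed values, summing to 1)
activities = [0.40, 0.20, 0.12, 0.10, 0.08, 0.05, 0.03, 0.02]

n = len(activities)
primitive_bits = ceil(log2(n))                        # individual binary address per sample
entropy_bits = -sum(p * log2(p) for p in activities)  # average information per active-source label

print(primitive_bits)          # 3 bits per sample with primitive addressing
print(round(entropy_bits, 2))  # ~2.46 bits per sample, the limit that group coding can approach
```

The gap between the two figures is exactly the redundancy that the group coding discussed in the following sections aims to remove.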
STATISTICS OF SYSTEM SOURCES ACTIVITIES IS A PRIORI WELL KNOWN In this case, both sources set, whose shape sequence of the non-redundant samples and the amount of the granted positions are fixed.However, contrary to the regular type system, the source signal sampling is random, corresponding to the signal current behavior.At the length N  1 , the i-th non-redundant selective values will occur in the sequence boundaries N i times   because on the sequence any position can be a non redundant value of arbitrary packed multiplexed sources.Nevertheless, due to the convergence of the event frequency to its probability at a considerable amount of experiments, for N  1 , it is conditionally possible to arrange all the sequences into two subgroups: typical, for which N N i i   and atypical -for which this relation is variable.Thus, all the typical sequences aggregate the probability with the lengths N increasing and will be guided to one, while the atypical sequences will tend to zero, so there will be only a necessity of the typical sequences enumeration.From the mathematical point of view, such N-positions sequences is a permutation with the repetitions [5,10,21,22], in which each i-th element address will occur in the sequences boundaries N i times.The different typical sequences number is as follows: and each corresponding number we shall present as follows: This number we shall term as a code of a disposition, the so-called Block Number Code.Having taken advantage of a Stirling formula [22] for factorials n r e n n e !  2 , we shall note For one sample of a sequence, the corresponding part of a Block Number Code is as follows: 2 log ( ) The above-obtained relation is, by the upper estimation of number log 2 Q , as follows: As for the left-hand part of an inequality, we have Analyzing the polynomial formula [22]     ), we observed that the different sets media are the natural numbers i K .In our set, i i K N  (that is possible to realize, as ).In addition, this combination gives only one possible addend of this sum.Therefore, it is obvious that the sum (3) will be larger than its non-negative composite, namely: is also accepted, then we shall have the following: Having compared equations ( 3) and ( 8), at the logarithm basis more than one, it is rightly to note that log ... , as it was necessary to prove. STATISTICS OF SYSTEM SOURCES ACTIVITIES IS A PRIORI UNKNOWN In this case, the sequence set and its positions, i.e., distribution between separate source samples, are not fixed.Therefore, besides an arrangement code by a size k Q , it is also necessary to present the information on its length by a size k q and the sequence sources set by a size k н , since the activity distribution    i a priori are not known.That is, the sequence servicing information However, in practice, for the frame synchronization facilitation, it is necessary to work with sequences of a stationary value of length, in particular, from stationary values of both measurement data and servicing parts.Contrary to the above parsed structure, it is thus impossible, shaping a servicing part, to do without the Set and the Block Number Codes, whose digit capacities k н and k Q , accordingly, are fixed. Let us assume that the sequence shaping ceases at the sequence information part or the Block Number Code digit grid filling. 
1) Let the first sequence of Data Set be filled.The Set Code length k н is determined by an information part length value of N, which should be large enough, nevertheless, from physical requirements, restricted.Its value should be such that the least fissile of the totality sources has filled even in one position of sequence of Data Set parton the one hand and limits by a value practically implemented digit grids of a Block Number Code Q k -on the other hand.Thus,   min max The probability that the Set Code is filling faster than the Block Number Code, coincides with the probability that a length of the sequence of Data Set part exceeds the selected one.It is neglectfully small, because the Block Number Code length k Q optimality is close to entropy.Thus: , where  and  are arbitrary, certainly given positive small values. The Block Number Code binary digits number, which is necessary on one selective sample value, is determined by totality sources activities entropy.Thus, it will be a maximum at the equiprobable activities distribution.Hence, at a given length of a Block Number Code, it is possible to get the sequence of Data Set part selective values amount, at which there will already be the filling 2) The first Set Code digit grid is filled.Thus, if positions of the sequence of Data Set part are not yet exhausted, then the remainder of positions can be allocated on the "shadow" interrogation or other additional information. At a Set Code shaping, similarly to equation ( 2), here it is necessary to enter one more numeral boundary [23], if only to designate the unused sequence of Data Set part positions, that is The limiting Set Code length is determined by the greatest possible sequence information part selective samples values N max max max max max 2 2 max log log Expenditures, that are present on one selective sampling value, are as follows: Taking into account the equations ( 10) and ( 11), we shall show an asymptotic optimality of the given approach, i.e., Thus, this enumeration method is asymptotically effective at a sequence considerable size of N, because the minimum possible expenditures on an enumeration cannot be less than the entropy value [24,25].Such an approach may be used at designing of intelligent devices [26][27][28]. ACTIVITY ESTIMATION BASEMENT Let us suppose that the data compression of the measurement system is based on the adaptive commutator principle.Then, all analog sources are sampled at a constant rate with the period T. In each sampling point, an adaptive commutator chooses among the total sources the most active one, i.e., the chosen source has the largest among other sources instant difference value, normalized with respect to its source analog measurement signal mean-square deviation (Fig. 
1).The samples of the rest sources are supposed to be redundant.The i-th most active source sampling value takes place at the i-th memory cell for the next sample time comparison, and the source activity manifestation is indicated at the i-th counter.Practically, the i-th source difference is estimated between the current sampling moment value and the previous activity manifestation value, which is picked from its memory cell.It is known [29] that the i-th source intensity , here, i  and i  are the i-th source mean-square frequency and mean-square deviation, respectively; X m is the sources totality maximum discretization error mean value.In this case activities sequence is following: 1,2,2, ....The i-th source relative activity i  is determined as relative intensity , here,   is the system totality sources mean-square frequency (the last intensity to mean-square frequency transformation is possible at the equality of all sources discretization error values). During the analysis time a T , the current totality system sources absolute activity distribution is formed at the n counters.Therefore, it is possible to gather the non-redundant samples frame and its real time group code mapping.Moreover, due to all the sources counters contents, using the same algorithm, general object state mapping is realized.It was proved that the certain sources samples number corresponds to the unique single-valued coding combination (code value) [18].The all sources activity distribution code (Block Number Code) is sent by the transmission link to the higher level of hierarchy. PROCESSING ALGORITHM DESCRIPTION The number of permutations is equal to here, i N is the i-th type symbols number among the The permutation numerical coding algorithm is found on the chain division after the permutation place symbol [28].It was noticed that a power of each subset is proportional to the ratio of a number of certain type of symbols to the total sequence positions number.Thus, for subset m S with m-type symbol at the first position . At the second step, each subset divides after the type symbol which takes the second position in the permutation.A power of a newly formed subset is proportional to the ratio of a certain type symbol number, which we meet from the second to the last position of the permutation to the total sequence positions number at this step, here it means to (N-1).However, if at the second position we have the same m-type symbol, then the ratio is , because at the second step we have yet only (N m -1) m-type symbols and (N-1) total positions, and so on.At a certain j-th step, some type of symbol cannot appear if its number has already exhausted.Such a procedure makes it possible to have a definite correspondence between a certain permutation set S and its number of the natural row 0-(M-1).It was suspected, that the true enumerative coding would be if the sequence number is formed as follows: 1), ( ) ( (13) here, M j (i) is the S j (i) subset power value ; it should be noted that both the subset S j (i) permutations, and the analysed permutation p have (j-1) identical positions and the i-type symbol at the j-th position of permutation. Therefore, the first j permutation positions of a subset S j (i) are fixed.The number of such permutations defines probable permutations of the rest (N-j) symbols.Within there is , mi) and [N i -R i (j)-1]-i-type symbols; here, R m (j) is the number of m -type symbols among the first (j-1) positions of the permutation p. Thus, the power of subset S j (i) (mi) . 
(14) The kernel of the last expression ( 14) We can note that here, j C is the quantity of symbols whose number- type is less than the symbol number-type located at the j-th position of the permutation.This algorithm can be used if the absolute activities values   i N are known.Therefore, this is convenient for the activities distribution reflection.This is the so-called Set Code. Non-redundant samples mapping of all the sources during the real appearance of the measurement data corresponds with the unknown activities values   i N at the coding word formation [18, 27. 28].This is the so-called Block Number Code. For this case, it was noticed that each j-th position counted from the beginning might be considered as the l-th position from the end.Thus, l = N-j+1. ( The kernel (15) corresponding to the j-th position from the beginning is the same for the l-th position from the end Thus, using expressions ( 18)-( 20) in equations ( 16) -( 17), we receive a new algorithm [18]  Permutation elements can be analysed in the order of their appearance and the number   l N i is considered as the number of i-type symbols among the l analysed.The permutation numbers obtained after (17) and (22) equations should be the same for the same input conditions, but with the opposite order of the appearance of elements.Algorithm description is as follows [30]: Comment: D assign value of ) ( } 19: } Such servicing information perfectly corresponds to the information entropy of the object sources and may be used for an express analysis of the current state of the object.At each step of analysing the information state of the investigated object, the most active sensor is chosen.An output analog signal of any sensor is conditioned and a certain address number is prescribed to it.To illustrate the process of analysing let us consider a simplified example of sensor network which consists of three sources, and the analyzing period consist of six steps (i.e., n=3, N=6).Let us suppose that we received the activities sequence as follows: 2, 1, 1, 3, 1, 3.This means that the first active sensor has number 2 and the last has number 3. After formula (22), in this case we obtain the following: quantity of sequence positions Ni[] -array of type symbols number among the N sequence positions size of quantity of different values in sequence X[] -set of values to encode size of N Output: Kp -value of kernel K(p) R[] -array of numbers of i -type symbols among the permutation p Algorithm's initialization: * (l-1) / ( Ni[X[l]] -R[X[l]] )Comment: Acc assign value of  Kp ← Kp + D * Acc 18: The code (i.e., group number) that corresponds with the situation is following (21): It was formed in real time.Another order of addresses results in a different group number.The information entropy estimation (4) is calculated as follows:
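The closing entropy figure referred to above did not survive text extraction; under the standard Shannon formula it would be H = -(3/6·log2(3/6) + 1/6·log2(1/6) + 2/6·log2(2/6)) ≈ 1.46 bits per sample for the example activity sequence 2, 1, 1, 3, 1, 3. The sketch below reproduces that number and also ranks the example sequence with a standard lexicographic multiset-permutation ranking. This realizes the same enumerative-coding idea as the Block Number Code, but because the paper's exact kernel recursion (its equations (13)-(22)) is not fully recoverable from this text, the ordering convention, variable names, and the resulting index value are assumptions of this sketch rather than the paper's own numbers.

```python
from math import factorial, log2, ceil
from collections import Counter

def multiset_permutations(counts):
    """Number of distinct arrangements of a multiset with the given symbol counts."""
    n = sum(counts.values())
    m = factorial(n)
    for c in counts.values():
        m //= factorial(c)
    return m

def rank(seq):
    """0-based lexicographic index of seq among all distinct permutations of its symbols."""
    counts = Counter(seq)
    r = 0
    for x in seq:
        for s in sorted(counts):
            if s >= x:
                break
            if counts[s] > 0:
                counts[s] -= 1
                r += multiset_permutations(counts)  # arrangements with the smaller symbol fixed here
                counts[s] += 1
        counts[x] -= 1
    return r

seq = [2, 1, 1, 3, 1, 3]               # activity sequence from the worked example above
counts = Counter(seq)                  # source 1: three times, source 2: once, source 3: twice
M = multiset_permutations(counts)      # 6!/(3!*1!*2!) = 60 possible frames
print(M, rank(seq))                    # 60, 31 (index under this sketch's lexicographic order)
print(round(log2(M), 2))               # ~5.91 bits to enumerate the whole 6-sample frame
print(ceil(log2(3)) * len(seq))        # 12 bits with primitive 2-bit addresses for the same frame

# Empirical entropy of the activity distribution {1: 3/6, 2: 1/6, 3: 2/6}
N = len(seq)
H = -sum((c / N) * log2(c / N) for c in counts.values())
print(round(H, 3))                     # 1.459 bits per non-redundant sample

# Scaling the same 3:1:2 activity ratio shows log2(M)/N approaching H as N grows
for k in (1, 10, 100):
    scaled = {1: 3 * k, 2: 1 * k, 3: 2 * k}
    print(6 * k, round(log2(multiset_permutations(scaled)) / (6 * k), 3))  # 0.984, ~1.36, ~1.44
```

For this toy frame the enumerative index needs about 6 bits against 12 bits of primitive addressing, and the per-sample cost log2(M)/N climbs toward the entropy H as the frame grows, which is the asymptotic-optimality claim made in the section on a priori known statistics.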
2018-12-12T20:01:28.993Z
2016-03-31T00:00:00.000
{ "year": 2016, "sha1": "b35c56f81609358cbc5df9e8907ff7d03208b71f", "oa_license": "CCBY", "oa_url": "https://computingonline.net/computing/article/download/828/753", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "b35c56f81609358cbc5df9e8907ff7d03208b71f", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
221116052
pes2o/s2orc
v3-fos-license
Acute, Subacute, and Genotoxicity Assessments of a Proprietary Blend of Garcinia mangostana Fruit Rind and Cinnamomum tamala Leaf Extracts (CinDura®) The present communication describes a battery of toxicity studies that includes an acute oral toxicity study, a subacute twenty-eight-day repeated oral dose toxicity study, and genotoxicity studies on the herbal formulation CinDura® (GMCT). This proprietary herbal composition contains extracts of the Garcinia mangostana fruit rind (GM) and the Cinnamomum tamala leaf (CT). The toxicological evaluations were performed following the Organization for Economic Cooperation and Development (OECD) guidelines. The acute oral toxicity study in Wistar rats suggests that the median lethal dose of CinDura® is at least 2000 mg/kg body weight. Acute dermal and eye irritation tests in New Zealand white rabbits indicate that the test item is nonirritant to the skin and eyes. A twenty-eight-day repeated dose oral toxicity study was conducted in male and female Wistar rats using daily doses of 250, 500, and 1000 mg/kg body weight, followed by a fourteen-day reversal period for two satellite groups. The CinDura®-supplemented animals did not show any sign of toxicity in their body weights, organ weights, or hematobiochemical parameters. The gross pathology and histopathological examinations indicated no treatment-related changes in the experimental animals. Overall, the no-observed-adverse-effect level (NOAEL) of the herbal blend is 1000 mg/kg body weight, the highest tested dose. Also, the results of the bacterial reverse mutation test and the erythrocyte micronucleus assay in mouse bone marrow suggest that CinDura® (GMCT) is neither mutagenic nor clastogenic. A preclinical study demonstrated that CinDura®-supplemented Swiss albino mice had improved grip strength and swimming performance in a forced swim test. Further, a double-blind, placebo-controlled clinical study established that GMCT enhanced muscle strength and endurance among young males when supplemented in combination with a resistance training program [1]. Garcinia mangostana, popularly known as mangosteen, grows in Asian countries such as Malaysia, Myanmar, Thailand, the Philippines, Sri Lanka, and India. The fruit contains soft and juicy edible pulp [2]. The pericarp of this fruit, or fruit rind, is used as a traditional medicine in treating various ailments such as trauma, skin infection, abdominal pain, dysentery, and wounds [3]. The G. mangostana fruit rind extract is rich in a major xanthone, α-mangostin [4]. α-Mangostin is a potent bioactive phytochemical responsible for anti-inflammatory, analgesic, antioxidant, and antilipogenic activities [3,4]. Cinnamomum tamala is commonly known as tejpatta, Malabar leaf, or Indian bay leaf in India. C. tamala leaves are widely used as a flavoring agent and spice in a variety of culinary preparations and as a natural food preservative in Asian countries [5]. In Ayurveda, these leaves have a high medicinal value. Traditionally, C. tamala leaves are used to treat diabetes, hyperlipidemia, inflammation, hepatotoxicity, diarrhea, etc. [6,7]. C. tamala leaf extracts, or its essential oil, have been shown to have potential anti-inflammatory, antioxidant, antimicrobial, antidiabetic, and hepatoprotective effects in in vitro and in vivo models [8]. Despite the extensive use of various preparations of the G. mangostana fruit rind or C. tamala leaves as medicine or food, toxicological evaluations of GMCT are vital to establish its safety for human use. Earlier, Jujun et al.
reported that repeated oral supplementation of an ethanol extract of G. mangostana fruit rind for twenty-eight days did not show systemic toxicity in Sprague Dawley rats, which concluded its safety for human consumption [9]. However, to the best of our knowledge, no report is available so far on preclinical toxicological evaluations, including genetic toxicity studies on C tamala leaves. erefore, it is crucial to conduct a systemic and genetic toxicological assessment to ensure the safety of the herbal blend for human consumption although the individual ingredients are believed to be safe. Here, we present acute oral toxicity and a repeated dose 28-day oral toxicity studies in Wistar rats to establish the systemic safety of GMCT. Further, a bacterial reverse mutation assay and a micronucleus assay in mouse bone marrow erythrocytes present the genetic safety of the blend. All experiments followed the testing guidelines of the Organization for Economic Cooperation and Development (OECD). Test Item. e test item GMCT (CinDura ® or LI80020F4) is a proprietary herbal composition manufactured in a Good Manufacturing Practice-(GMP-) certificated facility of Laila Nutraceuticals, Andhra Pradesh, India. It is composed of seven parts of an herbal blend containing aqueous ethanol extracts of the Garcinia mangostana (GM) fruit rind and Cinnamomum tamala (CT) leaf at 1 : 2 ratio and three parts of the excipients, microcrystalline cellulose and Syloid. e final blend CinDura ® is a light greenishbrown to dark brown powder with a characteristic odor and taste and is standardized to contain at least 3.5% α-mangostin and 0.1% rutin, as described earlier [1]. e manufacturing process included extraction and processing of the individual plant raw materials under appropriate process controls and comprised typical process steps including pulverizing, extraction, concentration, and drying, followed by blending of extracts along with the excipients and sieving. e final product was tested for residual solvents, heavy metals, and microbial growth as part of the quality control check. e plant raw materials, G. mangostana fruit rind and C. tamala leaves, were purchased from Indonesia and Nainital, Uttarakhand, India, respectively. Following the taxonomic identification of the plant raw materials, Laila Nutraceuticals R&D Center, Vijayawada, India, preserved their voucher specimens. e methods of extract preparations and the phytochemical standardization of GMCT are described earlier [1]. Experimental Animals. Pathogen-free adult 7-8 weekold male and female Wistar rats were purchased from Vivo Bio Tech Ltd., Hyderabad, India. Male and female Swiss albino mice (6-8 weeks old) were obtained from Palamur Biosciences Pvt. Ltd., Hyderabad, India. New Zealand white male rabbits (9-10 weeks of age, weighing 2.0-2.8 kg) were procured from Mahaveera Enterprises, Hyderabad, India. e animals were acclimatized to the housing conditions (22 ± 3°C with 40-70% relative humidity, and in a 12 h lightdark cycle) for one week before the start of the experiments. e animals received a standard rodent chow and reverse osmosis-filtered water ad libitum. All procedures related to animal handling and investigations followed the Committee for the Purpose of Control and Supervision of Experiments on Animals (CPCSEA) guidelines for animal care. e Institutional Animal Ethics Committee (IAEC), Laila Nutraceuticals, Andhra Pradesh, India, approved the study protocols. Acute Oral Toxicity Study. 
An acute oral toxicity test was conducted in overnight-fasted female Wistar rats (180-200 g body weight), following the OECD Test 425 guidelines [10]. e code number of the ethical document of this study is LN/IAEC/TOX/LN140405. A limit test was performed by sequential use of five female Wistar rats in an interval of 48 hrs. GMCT was dissolved in distilled water, and a single dose of 2000 mg/kg was administered through oral gavage. Following the dose administration, the animals were monitored for any clinical signs of toxicity every hour for the first four hours and then every day for the next 14 days. All animals' body weight, food, and water consumptions were recorded daily. Following the CO 2 euthanasia at day 15, the vital organs and tissues were examined for any gross pathological changes. Acute Dermal Toxicity Study. An acute dermal toxicity test (ethics approval document number: LN/IAEC/TOX/ LN140104) was conducted in five male (weighing 239-264 g) and five female (weighing 219-235 g) Wistar rats using a single dermal application of GMCT (2000 mg/kg BW), following the OECD Test Guideline 402 [11]. In brief, GMCT was moistened with distilled water and evenly distributed on a gauze patch. e gauze patch was then applied to the intact skin (4 × 4 cm) and held securely using a nonirritating adhesive tape for 24 h. After the patch was removed, the residual test item was wiped off the skin using water-soaked cotton. e animals were observed for signs of toxicity after 30 min of application, at every hour up to 6 hr and then daily for consecutive 14 days. e animals were examined for any gross pathological changes. Acute Dermal Irritation Study. e acute dermal irritation test (ethical approval document number: LN/IAEC/ TOX/LN141203) was performed on healthy young adult male New Zealand white rabbits (2.0-2.25 kg), following the OECD Test Guideline 404 [12]. Five hundred milligram of moistened GMCT was applied on the shaved skin at the dorsal trunk area of the rabbits with the help of a gauge pad (6 × 6 cm). e gauge pad was securely held on the skin using a nonirritating adhesive tape for 4 hr. After removal of the gauge pad, the signs of skin reactions were observed for up to 72 hr. First, the test procedure was performed on one rabbit and then repeated on two more rabbits. After the observation period, the animals were sacrificed; their vital organs were examined for any gross pathological changes. Acute Eye Irritation Study. e acute eye irritation potential of GMCT was tested (ethical document code number: LN/IAEC/TOX/LN141204) in healthy young male rabbits, following the OECD Test Guideline 405 [13]. Before the test item application, the eyes were examined; 100 mg of the test item was applied in the conjunctival sacs of their left eyes. Following the dosing, the eyelids were held together for a few seconds to prevent the loss of the test item. After 1 hr, the treated eye was rinsed with sterile saline. e untreated right eye served as the control. e signs of eye irritation in the conjunctiva, iris, and cornea of both eyes were scored at 1, 24, 48, and 72 hr after the test item application using an ophthalmoscope [14]. First, the test procedure was performed on one rabbit and then repeated on two more rabbits. After the observation period, the animals were sacrificed; their vital organs were examined for any gross pathological changes. Twenty-Eight-Day Repeated Dose Oral Toxicity Study. 
A twenty-eight-day repeated dose oral toxicity study (ethics approval document number: LN/IAEC/TOX/LN141206) was conducted in Wistar rats, following the OECD guidelines 407 [15]. Rats of both sexes (n = 10; five males and five females) were randomly allocated into six groups-four primary groups and two reversal groups. GMCT prepared in 0.5% CMC-Na was administered daily for twenty-eight days in single oral doses of 250 (low dose, G2), 500 (mid dose, G3), or 1000 mg/kg BW (high dose, G4). e vehicle control group (G1) received 0.5% CMC-Na with no test item (0) through oral gavage daily for 28 days. On day 29, the animals in the primary groups G1, G2, G3, and G4 were euthanized by CO 2 inhalation. e two reversal groups, G1R and G4R, received oral gavages of 0.5% CMC-Na containing no test item (0) and 1000 mg/kg BW/day GMCT, respectively, for twenty-eight days and were sustained in their in-life phase with regular rodent chow for an additional fourteen days, without the vehicle or GMCT supplementation. e dose verification and homogeneity of the test product were analyzed using HPLC on days 1 and 25 of the study. All dose formulations were prepared fresh every day and administered at an equal dosing volume of 10 mL/kg BW. e body weight was recorded weekly and at necropsy; the clinical signs, morbidity, or mortality were recorded every day during the study duration. Body weight and feed consumption were measured weekly and at necropsy. After CO 2 euthanasia, the animals' vital organs were collected for macroscopic and microscopic pathological examinations. e organs harvested were the liver, kidneys, heart, lungs, brain, spleen, adrenal glands, thymus, testes, epididymis, seminal vesicles, ovaries, and uterus. e organs were weighed on an electronic balance with 0.01 g accuracy (Mettler Toledo, Columbus, OH). For histopathology examinations, the organs were fixed in 10% neutral buffered formalin for 48 hr. e paraffin-embedded organs were cut into 5 μm thick sections using a rotary microtome. e tissue sections were processed in graded alcohol and stained with hematoxylin and eosin. e stained tissue sections were examined under a light microscope Axioscope (Carl Zeiss, Munich, Germany). Bacterial Reverse Mutation Test. e bacterial reverse mutation test was conducted following the OECD Test Guideline 471 [16]. e mutagenic effect of GMCT was Journal of Toxicology assessed in S. typhimurium TA98, TA100, TA1535, TA1537 strains and Escherichia coli WP2 uvrA (pKM101) in the presence or absence of metabolic activation (S9 fraction) [17,18]. In brief, a hundred microliters of fresh broth of each bacterial strain was mixed with increasing concentrations of the test product. e bacterial suspensions were mixed with an overlay agar and plated over a minimal glucose agar plate. e plates were incubated at 37°C for 48-72 hr. e final concentrations of GMCT were 100, 266, 707, 1880, and 5000 μg per plate. e vehicle control culture plates received sterile normal saline mixed with the bacterial culture broth. Each dose of GMCT was tested in three replicate plates. e number of revertant colonies was counted manually in the test plates. In parallel, the cultures grown in the presence or absence of the strain-specific mutagens, 2-aminoanthracene or 2-nitrofluorence or sodium azide or 9-aminoacridine or 4-nitroquinoline-1-oxide, were positive controls. A test concentration was considered as mutagenic only when the number of revertant colonies was at least double the vehicle control (spontaneous yield). 
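As a minimal numerical illustration of the decision rule just stated (a concentration counts as mutagenic for a strain only when revertants reach at least twice the vehicle-control yield), the sketch below uses invented colony counts; they are assumptions for illustration only and not data from this study.

```python
# Invented mean revertant counts per plate (not data from this study)
vehicle_control = {"TA98": 22, "TA100": 95, "TA1535": 12, "TA1537": 8, "WP2 uvrA": 30}
test_item       = {"TA98": 25, "TA100": 101, "TA1535": 14, "TA1537": 9, "WP2 uvrA": 33}

for strain, spontaneous in vehicle_control.items():
    fold = test_item[strain] / spontaneous
    verdict = "mutagenic" if fold >= 2.0 else "not mutagenic"  # two-fold rule described above
    print(strain, round(fold, 2), verdict)
```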
In Vivo Micronucleus Assay. Mammalian erythrocyte micronucleus assay was conducted in healthy Swiss albino mice (6-8 weeks old), following the OECD Test Guideline 474 [19]. Male and female mice were divided into five groups (n = 10; five males and five females). e animals received GMCT through oral gavage at doses of 500 (G2), 1000 (G3), or 2000 mg/kg BW (G4) mixed in 0.5% CMC-Na. e vehicle control group (G1) animals received 0.5% w/v CMC-Na through oral gavage. e test item and the vehicle were administered two times at an interval of 24 hrs. e positive control animals (G5) received cyclophosphamide monohydrate (40 mg/kg) through intraperitoneal (i.p.) injection only once on the second day, 24 hr before euthanasia. All animals were examined for clinical signs. Following CO 2 euthanasia, the bone marrow samples were aspirated from both femurs and smeared on randomly coded glass microscope slides. e Giemsa-stained bone marrow samples were examined under 40x objective of a light microscope (ECLIPSE E200, Nikon Corporation, Tokyo, Japan). Polychromatic erythrocytes (PCE) and normochromic erythrocytes (NCE) were counted in each bone marrow sample. A total of 4000 polychromatic erythrocytes (PCEs) were scored in each sample. e frequency of micronucleated polychromatic erythrocytes (MNPCEs) was expressed as a percentage. Besides, the number of PCE was counted in 1000 total number of erythrocytes (TE = PCE + NCE), and a ratio between PCE and TE represented the frequency of PCE. Statistical Analyses. e data were expressed as mean ± SD. Comparison analyses among different groups were performed by analysis of variance using the one-way analysis of variance (ANOVA) test with Dunnett's post hoc test. Student's t-test was used to analyze the data from the reversal groups for comparison. All comparisons were evaluated at the 95% level of confidence, and P values less than 0.05 were considered statistically significant. A Single-Dose Oral Administration of GMCT Did Not Cause Abnormality or Death in Wistar Rats. No morbidity or mortality occurred following a single-dose oral administration of 2000 mg/kg BW GMCT in the female Wistar rats. e rats were generally active and did not show any visible signs of toxicity. ey did not show abnormal changes in their body weights, feed, and water consumption during the postdose 14-day observation period. A gross pathological examination did not show any abnormalities. Together, these observations indicate that the median oral lethal dose (LD 50 ) of GMCT is at least 2000 mg/kg BW in female Wistar rats. GMCT Did Not Show Signs of Toxicity or Irritation in the Skin or Eyes. e GMCT-treated male and female Wistar rats did not show any signs of dermal toxicity, adverse pharmacological effects, or any behavioral abnormality. ese results suggest that that the dermal LD 50 of the test item is at least 2,000 mg/kg BW in Wistar rats of either sex. GMCT did not show any skin reactions in the New Zealand rabbits in the acute dermal irritation test. e observations suggest that the acute skin irritation index is 0 and conclude that GMCT is nonirritant to the rabbit skin [20]. In the eye irritation test, application of GMCT did not show signs of severe irritation in the experimental animals. e iris was normal throughout the observation period. ere were no signs of ulceration or opacity in the cornea. 
Conjunctival redness and mild swelling or chemosis were observed in the test eye of the experimental rabbits; however, these signs were reversible, and the eyes became normal within 24 hr of application. Based on the Harmonized Integrated Classification System, GMCT is nonirritant to the eyes [20]. GMCT Supplementation Was Nontoxic and Did Not Yield Pharmacologic Effects in the 28-Day Repeated Dose Oral Toxicity Study. Oral administration of GMCT for 28 days did not attribute any sign of toxicity and mortality in male and female Wistar rats. e rats in the primary groups, including the reversal groups, were generally healthy and active. All animals in the primary and reversal groups survived through the end of the study. Tables 1 and 2 present the total and weekly feed consumption by the male and female rats in the primary and reversal groups. Among the primary groups, the male high-dose group (G4 male) consumed significantly less feed in the first week of the study, compared with the control (G1 male) rats. e G4R males had significantly less feed during the first three weeks and the final week of the study. e G4R female rats also consumed substantially less feed in the first week of the study, compared with the respective control rats (G1R female). However, the total feed consumption by the treated rats was not significantly different compared with the matched controls irrespective of the sex of animals or treatments. e changes in body weights of male and female rats following the test item supplementation are presented in Tables 3 and 4, respectively. GMCT-supplemented male and female rats in the primary groups including the reversal groups did not show treatment-related changes in their overall body weight when compared with the vehicle control animals. All groups of GMCT-supplemented male and female rats showed a natural pattern of body weight gain as observed in the vehicle control rats during the study (Tables 3 and 4). At necropsy, the vital organs of the experimental rats were weighed. e absolute and the relative organ weights (expressed as a percentage of their body weights) of the male and female rats are shown in Tables 5 and 6, respectively. e absolute and relative organ weights of the GMCT-supplemented male and female rats were not statistically different when compared to the sex-matched controls, except a significant increase in total but not in the relative weight of the spleen in G4R females. No treatment-related significant changes were observed in the hematology parameters of GMCT-supplemented male and female rats (Tables 7 and 8), with an exception that the leucocyte count significantly increased in the G4 female rats, compared with the control rats (Table 8). e serum biochemistry parameters of the male and female experimental rats are presented in Tables 9 and 10, respectively. e mid dose (G3) male rats showed a significantly reduced level of total protein, compared with the control (G1 male). e other biochemical parameters did not show treatment-related changes (Table 9). Overall, in the females, the majority of the biochemical parameters were not significantly different from the controls, with the exceptions of variations in AST, ALT, total cholesterol, and glucose (Table 10). e G2 and G3 female rats showed an increased AST level; and the G4 female rats showed an increased ALT level, compared with the control (G1 female) rats. Serum glucose and total cholesterol were significantly increased in the G4R female rats when compared with the G1R rats. 
No such treatment-related changes were observed in the male rats of the primary or reversal groups of the study. However, the observed levels of these biochemical parameters are within the normal ranges. The male and female rats did not show treatment-related changes in the kidney function parameters (creatinine and BUN). GMCT supplementation did not affect the major serum electrolytes calcium, magnesium, sodium, and potassium. GMCT Supplementation Did Not Cause Significant Macroscopic or Microscopic Changes in the Vital Organs of the Male and Female Rats. Daily administration of 1000 mg/kg BW GMCT did not cause substantial changes in the vital organs of the experimental male and female rats. The gross morphology of the vital organs of the high-dose male and female rats was unaltered following the treatment and reversal period. Tables 11 and 12 summarize the histological observations on the vital organs of the male and female animals, respectively. (In the accompanying tables, data are presented as mean ± SD, n = 5; * and # denote significance at p < 0.05 vs. G1 and G1R, respectively.) The GMCT-supplemented rats showed no significant histological changes compared with the control rats. As an exception, one rat of each sex showed mild traces of mineralization in the kidneys following the test item supplementation. Overall, the treatment-related effects on the vital organs were similar to those of the control rats. The microscopic examinations did not show significant changes in the hematoxylin-eosin-stained sections of the vital organs of the high-dose-supplemented rats compared to the control rats (Figure 1). GMCT Is Neither Mutagenic nor Clastogenic In Vitro and In Vivo. The Ames bacterial reverse mutation assay showed that increasing concentrations of GMCT did not alter the number of revertant colonies of the S. typhimurium and E. coli strains in the presence or absence of the S9 metabolic activation (Table 13). The strain-specific mutagens, which served as the positive controls, significantly increased the number of revertant colonies in the culture plates as indicated (Table 13). (Table 9 lists the effect of oral administration of GMCT on biochemical parameters in male Wistar rats by dose in mg/kg/day; data are presented as mean ± SD, n = 5; * p < 0.05 vs. G1; $ the historical normal value of large unstained cells (LUC) is 1.20 ± 0.6%, min. 0.3%, max. 4.0%.) In the micronucleus assay, GMCT supplementation did not increase the ratio of polychromatic erythrocytes (PCE) to total erythrocytes (TE) in the bone marrow samples of either male or female mice, in comparison with the vehicle control animals. Besides, GMCT administration also did not increase the frequency of micronucleated PCE (MNPCE) in male or female mice. The positive control group, the cyclophosphamide-treated mice, showed a significant increase in the incidence of MNPCE in their bone marrow samples (Table 14). Discussion The current study evaluated the safety and toxicological profile of GMCT (CinDura®), a proprietary composition of Garcinia mangostana fruit rind and Cinnamomum tamala leaf extracts. Individually, these two plant raw materials have been valued in complementary and alternative medicine for centuries [3,5,6]. Besides, these plant materials are considered safe based on the long history of human consumption either as a medicine or as a food ingredient. For the most part, the usage history of traditional medicinal herbs determines their safety.
Following the OECD guidelines, we assessed the toxicity of the herbal blend in an acute oral, acute dermal toxicity/irritation, and a subacute twentyeight-day repeated dose oral toxicity studies in Wistar rats. A primary eye irritation study in rabbits evaluated the irritant or corrosive effects of GMCT. Besides, the bacterial reverse mutation assay and a micronucleus assay in mouse bone marrow erythrocytes evaluated the genetic toxicity of this herbal composition. GMCT (CinDura ® ) is a proprietary combination of aqueous ethanol extracts of the G. mangostana fruit rind and C. tamala leaf at 1 : 2 ratio. is herbal blend is standardized to contain at least 3.5% α-mangostin and 0.1% rutin [1]. A double-blind, placebo-controlled clinical trial demonstrated that 800 mg/day of GMCT supplementation significantly improved muscle strength, muscle growth, and endurance performance in the young male participants in conjunction with a resistance training schedule of six weeks [1]. Besides, a series of in vitro experiments demonstrated that GMCT activated endothelial nitric oxide synthase in human endothelial cells and improved mitochondrial biogenesis in rat skeletal myoblasts. Moreover, this composition also activated mTOR signaling in the rat myoblasts in vitro (to be published elsewhere). In the present study, we first determined that the oral LD 50 of GMCT was 2000 mg/kg BW of Wistar rats, the highest tested dose. is herbal composition did not show any acute toxicity or mortality or any abnormalities in hematobiochemical parameters; the gross pathology of the vital organs was also unaltered. According to the toxicological classification criteria of the OECD, the observations suggest that the test item is a nontoxic composition and falls in the "no label" category [21]. Further, GMCT did not show any signs of dermal irritation or toxicity. Also, based on the "Harmonized Integrated Classification System for Human Health and Environmental Hazards of Chemical Substances and Mixtures," the herbal blend is nonirritant to the eyes. (e) (f ) (g) (h) Figure 1: Photomicrographs showing representative hematoxylin-eosin-stained (100x) sections of the liver, kidney, heart, and brain of the control and 1000 mg/kg GMCT-supplemented rats in the 28-day toxicity study. e left panels (a), (c), (e), and (g) present the liver, kidney, heart, and brain of the control rats, respectively. e right panels (b), (d), (f ), and (h) present the liver, kidney, heart, and brain of the GMCT-1000-supplemented rats, respectively. ere are no treatment-related microscopic changes in the major tissues of GMCT-supplemented rats compared to the control rats. 12 Journal of Toxicology In the twenty-eight-day subacute toxicity study followed by the fourteen-day reversal period, the highest dose of 1000 mg/kg/day of GMCT-supplemented rats did not show visible signs of toxicity. No animal in the primary or the reversal groups died during the study. e animals did not show treatment-related changes in body weight, cumulative feed consumptions, gross anatomy, organ weights, or histopathology. e toxic chemicals reduce body weight and cause associated changes in the absolute and relative organ weights [22]. In the present subacute study, all animals in the treatment groups gradually gained body weights throughout the study. Although some evaluation points showed that the G2, G4, and G4R rats of both sexes consumed significantly less feed in comparison with the respective controls, those were neither consistent nor dose dependent. 
e amount of feed consumed by these groups did not correlate with their body weight data; also, the total feed consumption in these groups was not significantly different compared with the respective controls. Together, these observations suggest that the oral administration of GMCT did not affect the natural growth and metabolism of the experimental rats. Hematology and serum biochemical parameters are essential for understanding the overall physiological status of the body, investigating a disease condition, diagnosis, and liver toxicity [23,24]. In the present study, the GMCTsupplemented rats (in the primary and reversal groups) did not show significant changes in the hematobiochemical parameters, compared with the respective control rats with exceptions in some parameters, most importantly AST and ALT in female rats in the primary groups of the study. However, these hematology and clinical chemistry values are within the normal physiological ranges related to the sex and age of Wistar rats [25]. ese changes are neither related to the dose nor the duration of the treatment. Elevated levels beyond the typical ranges of the liver transaminases indicate the abnormal liver function or the liver injury caused by toxic chemical exposure [26]. C. tamala leaves extracts are potent antioxidants and are hepatoprotective against chemical-induced toxicity in rodents [8,27]. Recently, Fu et al. reported that alpha-mangostin from the G. mangostana fruit rind extract activated antioxidant defense and parallelly induced anti-inflammatory response to protect lipopolysaccharide/d-galactosamine-induced liver toxicity in mice [28]. Kidneys eliminate urea and creatinine from the body through urine. Elevated levels of these metabolites in circulation indicate lack of clearance from the body due to impaired kidney function [29]. In the present study, oral administration of GMCT did not affect the serum urea and creatinine levels in the male and female rats, indicating no treatment-related adverse effect on the kidneys of the experimental rats. Overall, the observations on the hematobiochemical parameters suggest that oral administration of the maximum tested dose of the herbal blend (1000 mg/ kg/day) did not yield pathologic changes in the vital organs of the treated rats. Further, the gross and microscopic examinations of the vital organs revealed no treatment-related changes in the rats. ese observations also support that oral administration of this herbal composition did not cause systemic toxicity in the male and female rats. Taken together, the results of the twenty-eight-day repeated oral toxicity study establish that the "no-observed-adverse-effect level (NOAEL)" of GMCT (CinDura ® ) in the male and female rats is 1000 mg/kg body weight per day, the highest tested dose. is dose is equivalent to 162.07 mg/kg or a daily dose of 9604 mg for a human subject of 60 kg body weight, at least twelve-fold higher than the dose used in the clinical study [1]. Besides, a bacterial reverse mutation assay and a micronucleus assay in mouse bone marrow erythrocytes evaluated the genotoxic potential of GMCT. e toxicological data of genotoxicity assays are essential components of the safety evaluation of a drug candidate [30]. In the present study, the negative results from the Ames test and the in vivo micronucleus assay suggested that the test item did not induce mutagenesis and also did not promote clastogenesis or DNA damage. Hence, GMCT consumption is not of genotoxicity concern. 
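The human-equivalent-dose figure quoted in the discussion can be reproduced approximately with the standard body-surface-area conversion; the Km factors used below (6 for rat, 37 for adult human) are the commonly cited FDA defaults and are an assumption of this sketch, which is why the result differs slightly from the 162.07 mg/kg and 9604 mg quoted in the text.

```python
# Body-surface-area scaling of the rat NOAEL to a human-equivalent dose (HED).
# Km factors 6 (rat) and 37 (adult human) are the usual FDA defaults - an assumption here.
rat_noael_mg_per_kg = 1000            # highest dose tested in the 28-day study
km_rat, km_human = 6, 37

hed_mg_per_kg = rat_noael_mg_per_kg * km_rat / km_human
daily_dose_60kg_mg = hed_mg_per_kg * 60     # for a 60 kg adult
margin = daily_dose_60kg_mg / 800           # 800 mg/day was the clinical dose [1]

print(round(hed_mg_per_kg, 1))    # ~162.2 mg/kg (the text quotes 162.07 mg/kg)
print(round(daily_dose_60kg_mg))  # ~9730 mg/day (the text quotes 9604 mg)
print(round(margin, 1))           # ~12-fold above the clinical dose, matching "at least twelve-fold"
```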
Together, based on the results of the battery of toxicity studies, we conclude that the oral use of GMCT (LI80020F4 or CinDura®) is expected to be safe for humans.
Abbreviations
ALT: Alanine aminotransferase
AST: Aspartate aminotransferase
BW: Body weight
CPCSEA: Committee for the purpose of control and supervision of experiments on animals
HPLC: High-performance liquid chromatography
LD50: Median lethal dose
MNPCE: Micronucleated polychromatic erythrocytes
NOAEL: No-observed-adverse-effect level
Data Availability
The data used to support the findings of this study are available upon request.
Conflicts of Interest
The authors are employees of Laila Nutraceuticals R&D Center, Vijayawada, India, and have conflicts of interest for the research, authorship, and publication of this article.
2020-08-06T09:01:33.415Z
2020-07-30T00:00:00.000
{ "year": 2020, "sha1": "3b00eeca8ce6c4062105dd67555950abbfaa67b1", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2020/1435891", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "35d1aa4c944ba9c7e0028ea62a02b750ac4ccbc5", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
3216742
pes2o/s2orc
v3-fos-license
Right iliac vein thrombosis mimicking acute appendicitis in pregnancy: a case report Background Right iliac vein thrombosis is uncommon in pregnancy. Nonetheless, when it does occur, its presentation can be very nonspecific, posing important diagnostic challenges, and this can have negative therapeutic consequences, especially in a resource-limited setting. Case presentation The historical, clinical and laboratory data of a 30 year old G2P1001 woman of African ethnicity at 11 weeks of gestation pointed towards a right iliac vein thrombosis that was mistaken for acute appendicitis, with subsequent appendectomy and failure to cure. Following the diagnosis of right iliac vein thrombosis post-appendectomy, the patient was started on low molecular weight heparin and the clinical progress thereafter was favourable. Conclusion Pelvic vein thrombosis should be considered a differential diagnosis of intractable lower abdominal pain in early pregnancy. A high index of suspicion could lead to early diagnosis, prompt management and a favourable prognosis even in a low-income setting. Background Pregnancy is a hypercoagulable state, during which the risk of both arterial and venous thromboembolism is increased, although venous thromboembolism (VTE) predominates [1]. This hypercoagulable state results from a combination of altered coagulation factors, stasis and vascular damage [1]. VTE is a well-known cause of maternal mortality worldwide and is the leading cause of maternal mortality in the United Kingdom [2]. However, there are few data on the burden of VTE amongst pregnant women in low- and middle-income countries. Deep venous thrombosis (DVT) accounts for about 85% of cases of VTE in pregnancy [3], with two-thirds of these cases shown to occur during the antepartum period [4]. DVT in pregnancy is more likely to be proximal, with about 90% of cases occurring on the left side. Pelvic vein thrombosis (PVT) is uncommon outside pregnancy but accounts for 10% of pregnancy-related DVT [1]. PVT presents with symptoms similar to those of acute appendicitis, such as fever, abdominal pains, nausea and vomiting [5][6][7]. This could raise serious diagnostic and management dilemmas with negative therapeutic consequences, especially in resource-poor settings where the appropriate diagnostic and management arsenal is not always available. We report the case of a patient with a nonspecific picture of PVT that was mistaken for and managed as acute appendicitis, with subsequent development of severe bilateral lower limb DVT in a first-trimester pregnancy. Case presentation A 30-year-old G2P1001 sub-Saharan African female teacher at 11 weeks of amenorrhea presented to the Nkwen Baptist Health Center (Bamenda, North West Region of Cameroon) on the 15th of May 2016 with bilateral lower limb swelling and pain of 5 days duration. She had no known chronic illness and denied having a family history of VTE. She reported being well until 2 weeks prior to presentation, when she started experiencing abdominal pains; the pain was mainly in her lower abdomen, dull in nature, non-radiating, mild in intensity, and was initially intermittent then became constant. It was associated with intermittent low-grade fever. This prompted her to consult at a remote health center, where a urinalysis and a malaria parasite test were done, but the results were inconclusive.
She was then cautioned to be having early pregnancy symptoms and placed on acetaminophen 3 g per day in three divided doses which she took for a week with no regression of symptoms. The persistent and progressively worsening pain now localized at the right lower quadrant prompted a second consultation at another health facility. This pain was still associated with low grade fever and now included; loss of appetite and intermittent postprandial vomiting. The attending physician on examination remarked right iliac fossa tenderness and rebound tenderness with a positive Rovsing's sign. Presumptive diagnosis of acute appendicitis and differential of ovarian cyst in pregnancy were retained. An emergency surgery was booked. However, intra-operative findings revealed a normal appendix and ovaries. Following surgery, lower abdominal pains persisted and she complained of a sudden onset of crampy constant pains in her right thigh. She was told to be having post surgery pain, for which she was then given analgesics. On day 3 post hospitalization she was discharged on analgesics, antibiotics and progesterone suppository. While at home, the pains persisted and 2 days later involved her left calf area. This was associated with bilateral lower limb swelling that was more on the right lower limb. The pain increased in severity making it difficult for her to walk. This prompted consultation at our health facility. On arrival she was ill-looking and in painful distress. Her blood pressure was 122/76 mmHg, heart rate 94 beats/ min, respiratory rate of 22 breaths/min, temperature 37 °C, O 2 saturation at 97% and weight 58 kg. Her conjunctivae were pink and sclera anicteric, heart sounds were normal and lung fields clear. On examining the abdomen, a clean midline incision was seen and there was tenderness on deep palpation of the lower abdominal quadrants marked on the right. There was bilateral lower limb pitting oedema extending to the thighs with right lower limb more swollen than left. The limbs were mildly erythematous but there was no area of cracked skin or wound on both limbs that could have served as portal of entry for skin infection. Both lower limbs were warm tender. Based on these we made a tentative diagnosis of bilateral lower limb deep venous thrombosis in early pregnancy with a possible pelvic vein thrombosis that was misdiagnosed for acute appendicitis. Our health facility was not equipped with the necessary tools and personnel to confirm our diagnosis and manage the patient. She was therefore referred to a tertiary care center about 40 km from our facility. At the tertiary center compressive doppler ultrasound of the pelvis and lower limbs revealed pelvic and bilateral lower extremity veins seen with echoes in the right common iliac vein (Fig. 1), right femoral vein, left femoral vein and left popliteal vein. There was decreased colour flow in these veins and decreased compressibility. These suggested DVT of the right common iliac vein, right femoral vein, left femoral vein and left popliteal vein and thus confirmed our diagnosis of bilateral lower limb and pelvic DVT. Further laboratory testing showed the following: normal white cell count of 8100/µl, mild anaemia with haemoglobin of 9.8 g/dl, thrombocytosis of 532,000/ µl, normal kidney function test (serum creatinine of 0.64 mg/dl and urea of 12.7 mg/dl), glycaemia of 85.9 mg/dl and normal serum electrolytes of: (Sodium 134 mmol/l, Potassium of 4.17 mmol/l and Chloride of 103 mmol/l). 
Cardiac echography and an electrocardiogram were normal. The patient was immediately started on low molecular weight heparin (LMWH) 80 mg daily by the subcutaneous route. After 5 days of treatment the patient's symptoms had subsided and she was discharged and counter-referred for continuation of care. We continued her daily LMWH injections and scheduled her for a repeat of the pelvic and lower limb ultrasound. Six weeks later there were no more echoes in the pelvic and lower limb veins (Fig. 2). She continued daily LMWH till 12 weeks postpartum. Conclusion Pregnancy generally increases the risk of VTE by fivefold [8]. Unlike in the developed world, where VTE in pregnancy is the most common cause of maternal mortality [9], postpartum haemorrhage and preeclampsia are the most common causes of maternal mortality in developing countries. VTEs are thus not routinely considered a priority differential in most African countries by physicians when approaching pregnant women with acute abdominal pain. Consequently, VTEs in pregnancy in low- and middle-income countries are more likely to be missed [10]. The epidemiological and clinical burdens of VTEs in pregnancy in the African context may be grossly underestimated because of under/misdiagnosis and possibly an associated high case fatality rate, particularly in sub-urban areas.
Fig. 1 Echography showing the thrombus present as an echoic image inside the right common iliac vein
A particularity in the case presented is that, though pregnancy increases the risk of DVT, its occurrence in the first trimester of pregnancy is relatively low [4]. Thus, making a diagnosis of PVT, especially in early pregnancy, requires a high index of suspicion. Pelvic vein thrombosis is uncommon outside pregnancy and accounts for 10% of pregnancy-related DVTs [8]. Despite this, studies have shown that approximately 60% of proximal DVTs are limited to the femoral and iliac veins [11]. PVT may thus be more frequent than suspected. PVT typically presents with symptoms similar to those of appendicitis, such as fever, abdominal pains, nausea and vomiting [5,6]. These two conditions could therefore often be confused when judging only by clinical presentation. Acute appendicitis, being a frequent cause of acute abdominal pain, is a notoriously preferred aetiological diagnosis in most cases of right iliac fossa pain, especially in poor settings. Furthermore, it is difficult to differentiate between PVT and other pelvic pathologies such as torsion of an ovarian cyst, pelvic inflammatory disease, ureteral colic, enteritis or an ectopic pregnancy solely on a clinical basis. Therefore, clinical vigilance and the ability to rule out other differential diagnoses are essential, as untreated DVT can have devastating consequences and could even be lethal. Pulmonary embolism (PE) is a well-known fatal complication of DVT. Placement of an inferior vena cava filter is usually necessary to prevent this complication in patients who are not candidates for anticoagulation or who have had a previous PE while on therapeutic anticoagulants [12]. However, our patient did not have features suggestive of a PE. Early diagnosis of DVT is therefore necessary for prompt treatment and to prevent the development of these complications and sequelae. Diagnosis of DVT can be made using ultrasonography, computed tomography, magnetic resonance imaging or venography. Magnetic resonance venography is the method of choice for the diagnosis of PVT [1].
In resource poor settings where these equipments are not readily available, diagnosis is usually done by ultrasound as exemplified in the case presented. However, studies have shown that in pregnancy there is a fair agreement between ultrasound and magnetic resonance imaging for determination of the extend of DVT into pelvic veins [13]. Pelvic vein thrombosis is usually managed conservatively with anticoagulant therapy and in rare cases surgically. Favourable outcome with aggressive surgical therapy has been reported in a 35 weeks pregnancy with iliofemoral-popliteal DVT in Germany [14]. Intravenous antibiotics can also be used in case of sepsis. Treatment in pregnancy requires consideration of the twin issues of safety for the fetus and the mother. The standard treatment for VTE outside of pregnancy is LMWH in the acute phase for at least 5 days associated with warfarin, which is then continued for 3-6 months [15]. Warfarin is avoided in pregnancy as it crosses the placenta [9]. LMWH is at least as effective and safe as un-fractionated heparin for treatment of VTE and the effects more predictable with no requirement for routine monitoring [16]. In this case there was no evidence of infection so the patient was treated conservatively with daily LMWH injections and the outcome thereafter was favourable. The case presented confirms that PVT has a similar presentation to and is often confounded with acute appendicitis. PVT should thus be considered in any pregnant woman presenting with unexplained lower abdominal pains with a clinical picture similar to that of acute appendicitis. Clinicians should have a high index of suspicion in order to make a proper and early diagnosis so as to optimize the prognosis. Authors' contributions DA participated in the management of the patient and wrote the manuscript. BMK, CAD, NNB and LTN read and edited the manuscript. SPC made a critical review of the manuscript, provided intellectual guidance and approved the final manuscript. All authors read and approved the final manuscript.
2017-08-03T02:30:46.069Z
2017-01-03T00:00:00.000
{ "year": 2017, "sha1": "1169a489d63eea7af77caf8d3dd6e634dddde5dd", "oa_license": "CCBY", "oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/s13104-016-2351-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "49b8df12a480a92140d860278992833d1de27c87", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1325556
pes2o/s2orc
v3-fos-license
Understanding Acid-Base Disorders. The accurate interpretation of laboratory tests in patients with acid-base disorders is critical for understanding pathophysiology, making a diagnosis, planning effective treatment and monitoring progress. This is an important topic particularly for junior medical staff who may encounter acidbase problems outside normal working hours when patients become acutely unwell. These clinical situations may be a source of confusion particularly because of the variety of terms used to describe and classify acid-base disorders. In this article, we aim to provide the reader with an overview of the key concepts necessary for developing a good working understanding of acid-base disorders that commonly present in clinical medicine. We start with some acid-base disorder definitions and then provide a series of case vignettes to illustrate the key points. INTRODUCTION The accurate interpretation of laboratory tests in patients with acid-base disorders is critical for understanding pathophysiology, making a diagnosis, planning effective treatment and monitoring progress. This is an important topic particularly for junior medical staff who may encounter acidbase problems outside normal working hours when patients become acutely unwell. These clinical situations may be a source of confusion particularly because of the variety of terms used to describe and classify acid-base disorders. In this article, we aim to provide the reader with an overview of the key concepts necessary for developing a good working understanding of acid-base disorders that commonly present in clinical medicine. We start with some acid-base disorder definitions and then provide a series of case vignettes to illustrate the key points. DEFINITIONS Acidaemia An arterial pH below the normal range (pH<7.35). Alkalaemia An arterial pH above the normal range (pH>7.45). Acidosis A process lowering pH. This may be caused by a fall in serum bicarbonate and/or a rise in the partial pressure of carbon dioxide (PaCO 2 ). Alkalosis A process raising pH. This may be caused by a rise in serum bicarbonate and/or a fall in PaCO 2 . ACID-BASE HOMEOSTASIS Like temperature, blood pressure, osmolality and many other physiological parameters, the human body strives to keep its acid-base balance within tightly controlled limits. It is not the aim of this article to review in detail the physiology of acid-base homeostasis, but to provide a working knowledge of some key concepts that will help in the interpretation of results encountered commonly in clinical practice. More detailed free text reviews of acid-base homeostasis are available [1][2][3][4][5] . A buffer is a solution that resists a change in pH. There are many different buffer systems in the body, but the key one for understanding most acid-base disorders is the bicarbonate system present in the extracellular fluid. Like any buffer, this system comprises a weak acid (in this case carbonic acid, H 2 CO 3 ) and its conjugate base (the bicarbonate ion, HCO 3 -), which exist in a dynamic equilibrium as shown in Equation 1 6 : The acidity of a solution is governed by the concentration of hydrogen ions (H + ) present. If a disease process results in an increase in the concentration of hydrogen ions, one would expect the body to become more acidic. 
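For reference, the relationships the text refers to as Equations 1 and 2 can be written in their standard textbook forms (a reconstruction from the surrounding description, not a quotation of the original article):

Equation 1: CO2 + H2O ⇌ H2CO3 ⇌ H+ + HCO3-

Equation 2 (the Henderson relationship): [H+] ≈ 24 × PaCO2 / [HCO3-], with [H+] in nmol/L, PaCO2 in mmHg and [HCO3-] in mmol/L.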
However, the bicarbonate buffer system resists this change because the excess of hydrogen ions drives the reaction in Equation 1 to the right: hydrogen ions react with and "consume" bicarbonate ions and any change in acidity is minimised. This process requires an adequate supply of bicarbonate ions. The kidneys are vital organs in acid-base balance as they can both generate "new" bicarbonate buffer and reclaim filtered bicarbonate in the proximal tubules ( Figure 1). By rearranging and simplifying the above acid-base reaction, it is possible to derive the useful relationship shown in Equation 2 Equation 2 helps to illustrate how the body's hydrogen ion concentration can be regulated by altering the ratio of CO 2 to bicarbonate. Ventilation controls the PaCO 2 level and the kidneys regulate the bicarbonate level ( Figure 1). This makes it easy to see that the concentration of hydrogen ions increases in two settings: an increase in PaCO 2 or a reduction in plasma bicarbonate. One of the functions of ventilation is the elimination of CO 2 during exhalation. If a patient is tachypnoeic, they will tend to lose CO 2 , while patients with a reduced respiratory drive will retain CO 2 . An increased concentration of hydrogen ions (an acidosis) stimulates the respiratory centre to increase the rate of breathing (exhaling more CO 2 ). This mechanism is another key physiological response that helps to maintain acid-base balance. Acid-base disorders are broadly classified into problems involving metabolic and/or respiratory processes. Metabolic processes primarily direct change in the level of bicarbonate and respiratory processes primarily direct changes in PaCO 2 ( Figure 2). The body adapts, or compensates where there is an acidbase disturbance in an attempt to maintain homeostasis 7 . If the primary acid-base problem is metabolic, then the compensatory mechanism is respiratory. The respiratory rate is altered, usually within minutes, in an attempt to keep the hydrogen ion concentration normal. If the primary acid-base problem is respiratory, then the kidneys adapt to counteract the change by altering their handling of hydrogen ions. This process in the kidneys usually takes place over several days. CAUSES OF ACID-BASE DISORDERS Acid-base disorders are classified according to whether there is acidosis or alkalosis present (see pH section for details), and whether the primary problem is metabolic or respiratory ( Figure 2). Bear in mind that there may be more than one problem occurring simultaneously and that the body may be compensating for the derangement. Remember, metabolic processes primarily direct changes in bicarbonate and respiratory processes primarily direct changes in PaCO 2 ( Figure 2). MEASURED AND DERIVED INDICES Some potentially confusing terminology is often used when discussing acid-base disorders. These terms include PaCO 2 , total bicarbonate, total CO 2 , standard bicarbonate and base excess 8 . It is useful to know what these terms mean and how they are derived. Most blood gas analysis is carried out on point-of-care blood gas analysers, and these generally only measure two substances when it comes to acid-base reports: hydrogen ions (from which pH is calculated -see below) and PaCO 2 . The 'bicarbonate' results that are given from such analysers are generally calculated using Equation 2. Most laboratories measure total CO 2 concentration as part of the standard electrolyte profile. 
The reason behind this is that it is technically difficult to measure bicarbonate ions in isolation, but relatively straightforward to measure total CO 2 . Total CO 2 represents the total amount of bicarbonate ions, dissolved CO 2 and other CO 2 -containing substances in a solution. Since bicarbonate normally constitutes the majority of this, total CO 2 is normally used as a convenient surrogate measure of bicarbonate. The total CO 2 on the electrolyte profile may provide the first clue to the presence of an acid-base disturbance in a patient and should not be overlooked when reviewing electrolyte results. One cannot, however, diagnose acid-base disturbances from an isolated total CO 2 measurement. In order to characterise an acid-base disturbance, measures of pH, PaCO 2 , total CO 2 or bicarbonate are required, as well as measurement of the anion gap. Standard bicarbonate is a calculated index that attempts to provide information on what the bicarbonate concentration would be if the respiratory components of the disorder were eliminated. Base excess is another calculated index which will be elevated in the setting of metabolic alkalosis and reduced in metabolic acidosis. We will not consider the use of these calculated indices further in this article. UNDERSTANDING ACID-BASE DISORDERS -A FOUR STEP APPROACH In order to understand the nature of an acid-base problem, we recommend a structured approach during which the following four questions should be asked. Question 1: What is the pH? The first step in interpreting an acid-base problem is to look at the pH (or [H + ]) and decide if you are dealing with acidosis, alkalosis or normality. The concept of pH as a measure of acidity will already be familiar. Because the body compensates for acid-base disorders, it is possible that a disorder might be present even if the pH is normal. It should also be borne in mind that the body never over-compensates. Question 2: What is the bicarbonate? The second step in interpreting an acid-base disorder is to consider the bicarbonate concentration relative to the normal reference range (which will vary from laboratory to laboratory, but is typically in the range 22-29 mmol/L). A reduced bicarbonate concentration could mean that the body's main buffer is being used up buffering excess acid (hydrogen ion) production e.g. in lactic acidosis or ketoacidosis. Alternatively the reduced bicarbonate concentration could indicate a problem related to loss of bicarbonate from the gastrointestinal tract e.g. diarrhoea or a kidney problem i.e. failure to generate new bicarbonate or reclaim bicarbonate filtered into the renal tubules. A reduced bicarbonate concentration is a hallmark of metabolic acidosis. An increased bicarbonate concentration may indicate that there have been substantial losses of acidic fluid e.g. loss of gastric fluid from persistent vomiting or prolonged nasogastric aspiration. Alternatively an increased bicarbonate concentration may be a chronic adaptation by the kidney to high PaCO 2 levels in persons with chronic respiratory diseases associated with CO 2 retention (see Equation 1 where elevated CO 2 levels drives the equation to the left producing more bicarbonate). An elevated bicarbonate concentration is a feature of metabolic alkalosis. Question 3: What is the PaCO 2 ? The third step in assessing an acid-base problem is to measure the PaCO 2 . 
This is helpful in determining whether the respiratory system is responding normally to an acid load and reducing the PaCO2 to compensate for an acidosis, i.e. the primary acid-base disturbance is a metabolic acidosis and this is compensated by an increased respiratory rate resulting in a secondary respiratory alkalosis. A decreased PaCO2 is a feature of respiratory alkalosis. Alternatively, if there is a primary respiratory problem, e.g. respiratory failure associated with chronic obstructive pulmonary disease, the retained CO2 results in an elevated PaCO2 (and will drive Equation 1 to the left), producing a respiratory acidosis. It is also possible to develop a respiratory acidosis if drugs, such as opiate analgesics, depress the respiratory centre, resulting in a critical reduction in the rate of ventilation and hence CO2 retention. An elevated PaCO2 is a feature of respiratory acidosis. One can see that by examining the pH, bicarbonate and PaCO2 it is possible to deduce the nature of the primary acid-base disorder present and the compensatory response. Question 4: What is the anion gap? The final step in assessing an acid-base disorder is to calculate the anion gap. Bodily fluids are electrically neutral, meaning that the number of positive charges (cations) present equals the number of negative charges (anions). The most abundant anions are chloride and bicarbonate; numerous other anions are not routinely quantitated, for example proteins and sulphate ions. Sodium is by far the most abundant plasma cation; other cations present in much lower quantities include potassium, calcium and magnesium. If it were feasible to measure all charged substances in blood, it could be shown that the sum of the positively charged particles is exactly balanced by the number of those substances carrying negative charges. It is routine practice to measure only four charged particles: sodium, potassium, chloride and bicarbonate ions. As discussed earlier, total CO2 on the electrolyte profile may be considered a convenient surrogate measure of bicarbonate and can be used in the calculation of the anion gap. When the numbers of cations (sodium and potassium) are added, one will always find that they outnumber the anions (chloride and bicarbonate). This difference is what is meant by the term 'anion gap' [9] and reflects the unmeasured anions, as shown in Equation 4:
Anion gap = ([Na+] + [K+]) - ([Cl-] + [HCO3-]) (Equation 4)
Since the extracellular fluid potassium concentration is very much lower than the sodium, chloride or bicarbonate concentrations, and because it can only vary by a few mmol/L, it is often ignored, making the anion gap calculation simpler, as shown in Equation 5:
Anion gap = [Na+] - ([Cl-] + [HCO3-]) (Equation 5)
The reference interval (normal range) for the anion gap varies from laboratory to laboratory, and is inherently imprecise because of the number of measurements required for its calculation. An anion gap greater than 20 mmol/L is always considered to be abnormally elevated and a gap of less than 10 mmol/L abnormally low.
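The gap arithmetic in Equations 4 and 5 is easy to script; the short Python helper below is our own illustration (not part of the original article), checked against the worked electrolyte profile discussed next:

```python
def anion_gap(sodium, chloride, bicarbonate, potassium=None):
    """Anion gap in mmol/L. Include potassium for Equation 4; omit it for
    the simplified Equation 5 (all inputs in mmol/L)."""
    cations = sodium + (potassium if potassium is not None else 0.0)
    return cations - (chloride + bicarbonate)

# Worked example: Na 136, K 4.0, Cl 100, HCO3 (total CO2) 25 mmol/L
print(anion_gap(136, 100, 25))                 # 11 mmol/L (Equation 5)
print(anion_gap(136, 100, 25, potassium=4.0))  # 15 mmol/L (Equation 4)
```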
There is some debate in the literature about the significance of anion gaps in the range 10-20 mmol/L, but a pragmatic approach would be to actively seek out causes of a high anion gap in patients with gaps exceeding 14 mmol/L (or 18 mmol/L if potassium is included in the equation). Consider the following normal electrolyte profile: Na+ 136 mmol/L, K+ 4.0 mmol/L, Cl- 100 mmol/L, HCO3- (or total CO2) 25 mmol/L. The anion gap is calculated as 136 - 100 - 25 = 11 mmol/L (or 15 mmol/L if potassium is included in the calculation). The anion gap is illustrated in Figure 3a. Calculation of the anion gap is particularly useful in cases of metabolic acidosis since it can help in formulating a differential diagnosis [10]. There are two main categories of metabolic acidosis: high anion gap metabolic acidosis (HAGMA) and normal anion gap metabolic acidosis (NAGMA). A HAGMA is illustrated in Figure 3b. Common causes of HAGMA and NAGMA are detailed in Table 2.
Table 2 Causes of metabolic acidosis (common causes are in bold)
Table 3 Mnemonics for high anion gap metabolic acidosis
Several mnemonics for common causes of HAGMA have been developed [11], and some of the more useful examples are included in Table 3. From a clinical perspective, if a HAGMA is identified then the simplest approach to establishing a cause is to consider whether the patient has one (or more) of the three common aetiologies (lactic acidosis, ketoacidosis or kidney failure) [12]. If these conditions are not present then the HAGMA may be linked to ingestion of a toxin, e.g. methanol or ethylene glycol, or be due to the build-up of another acid such as 5-oxoproline (also known as pyroglutamic acid), which may accumulate with chronic paracetamol use in susceptible individuals [13]. As the laboratory tests for toxic alcohols are not rapidly available, it can be useful in a patient with an unexplained HAGMA to assess the "osmolal gap" [14]. This "gap" is the difference between the calculated serum osmolality and the laboratory measurement of serum osmolality (from a U&E sample). The calculated osmolality can be simply derived by using Equation 6. A high osmolal gap suggests the presence of toxic alcohols such as methanol or ethylene glycol. Rarely, patients with short bowel syndrome or following bariatric surgery can develop severe D-lactic acidosis and an associated encephalopathy [15]. Unabsorbed carbohydrates act as a substrate for colonic bacteria to produce D-lactate. This will result in a high anion gap metabolic acidosis, but the standard laboratory measured lactate (L-lactate) will be normal [15]. Calculated anion gaps that are low (below the reference interval) are uncommon. Causes include laboratory error or hypoalbuminaemia, but low gaps may rarely be found in association with a paraproteinaemia or intoxication with lithium, bromide, or iodide [10]. Case 1 An elderly man is admitted with septic shock. Shortly after admission, blood tests reveal the following: Question 2: What is the bicarbonate? Bicarbonate is low, indicating that the acidosis is metabolic in nature. Question 3: What is the PaCO2? The PaCO2 is low, reflecting a respiratory alkalosis.
The low level seen here is a reflection of the body's compensation in an attempt to correct the pH, i.e. a compensatory respiratory alkalosis is present. Question 4: What is the anion gap? The anion gap is high, indicating HAGMA. The most likely cause for this acid-base disorder is lactic acidosis due to poor tissue perfusion as a result of septic shock. Case 2 A woman is being treated for congestive cardiac failure on the coronary care unit. After several days of treatment, the following results are returned: The most likely cause for this acid-base abnormality is extracellular fluid volume loss and hypokalaemia due to treatment with diuretics. Case 3 An elderly woman with chronic obstructive pulmonary disease (COPD) is admitted with increasing confusion. Shortly after admission, blood tests reveal the following: Question 2: What is the bicarbonate? Bicarbonate is high, indicating that a metabolic alkalosis is present. The pH is low so the primary problem is an acidosis and is likely to be respiratory in nature. Question 3: What is the PaCO 2 ? The PaCO 2 level is just below the lower end of the normal range indicating a respiratory alkalosis is present. The pH is low so the primary problem is an acidosis (metabolic acidosis). The respiratory alkalosis therefore represents partial compensation of the metabolic acidosis. Question 4: What is the anion gap? The anion gap is 12 mmol/L, indicating that this is a normal anion gap metabolic acidosis. The most likely cause for this acid-base disorder is bicarbonate loss from the gastrointestinal tract due to diarrhoea. Returning to our initial case… Applying the four question approach to this case, it should now be apparent that the patient has a high anion gap metabolic acidosis with respiratory compensation. The common causes for this presentation can be quickly eliminated since his renal function is normal, and lactate and ketone levels are not elevated. A more unusual explanation for the presentation should be sought (see Table 2). In this case, the patient was subsequently found to have ingested 500 mL of screenwash containing ethylene glycol (antifreeze) in an attempt to end his life. Prompt recognition of the likely cause of this patient's high anion gap metabolic acidosis helps inform further investigation and management. This would include quantitation of ethanol and toxic alcohol concentrations to confirm the type of ingested poison. Ethylene glycol and methanol are metabolised by alcohol dehydrogenase to very toxic metabolites. If this diagnosis seems likely it is important to urgently seek senior help. Fomepizole is an alcohol dehydrogenase inhibitor which is easy to administer and prevents metabolism of these alcohols to their toxic metabolites. Haemodialysis will rapidly clear ethylene glycol, methanol and their metabolites and should be started if the patient is severely acidaemic or has evidence of end organ damage e.g. renal failure or visual loss. CONCLUSION Acid-base disorders are commonly encountered in clinical practice and a structured approach to assessment includes taking a history, performing a physical examination and careful interpretation of routine biochemical tests and arterial blood gas analysis. Additional investigations such as lactate, glucose, ketones or toxicology testing may be needed to more fully characterise a metabolic acidosis. Answering four questions will help determine the problems present in the clinical scenario: What is the pH? What is the bicarbonate? What is the PaCO 2 ? What is the anion gap? 
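As a rough illustration of how the four questions can be walked through on example numbers, here is a deliberately simplified Python sketch. The reference ranges are illustrative defaults (PaCO2 in kPa), the input values in the example are invented, and this is in no way a clinical decision tool:

```python
def interpret_abg(ph, bicarbonate, paco2, sodium, chloride,
                  ph_range=(7.35, 7.45), hco3_range=(22, 29),
                  paco2_range=(4.7, 6.0)):
    """Simplified first pass over the four questions in the text.
    Illustrative reference ranges only; not for clinical use."""
    findings = []
    # Question 1: What is the pH?
    if ph < ph_range[0]:
        findings.append("acidaemia")
    elif ph > ph_range[1]:
        findings.append("alkalaemia")
    # Question 2: What is the bicarbonate?
    if bicarbonate < hco3_range[0]:
        findings.append("low bicarbonate: metabolic acidosis (or compensation)")
    elif bicarbonate > hco3_range[1]:
        findings.append("high bicarbonate: metabolic alkalosis (or compensation)")
    # Question 3: What is the PaCO2?
    if paco2 > paco2_range[1]:
        findings.append("high PaCO2: respiratory acidosis (or compensation)")
    elif paco2 < paco2_range[0]:
        findings.append("low PaCO2: respiratory alkalosis (or compensation)")
    # Question 4: What is the anion gap? (potassium omitted, as in Equation 5)
    gap = sodium - chloride - bicarbonate
    findings.append(f"anion gap = {gap} mmol/L"
                    + (" (elevated)" if gap > 20 else ""))
    return findings

# Invented illustrative numbers resembling a lactic acidosis picture
print(interpret_abg(ph=7.25, bicarbonate=14, paco2=3.5, sodium=138, chloride=100))
```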
Using this approach will help guide further investigations and management of the patient. There are no conflicts of interest.
2017-10-24T12:50:18.829Z
2017-03-01T00:00:00.000
{ "year": 2017, "sha1": "3e608caf058e2fdc6b2b1fd61b3631a664589133", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "b1e10ed7d724774179321ba820e6bb9993c0bef0", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
261612558
pes2o/s2orc
v3-fos-license
Mitigating underreported error in food frequency questionnaire data using a supervised machine learning method and error adjustment algorithm Background Food frequency questionnaires (FFQs) are one of the most useful tools for studying and understanding diet-disease relationships. However, because FFQs are self-reported data, they are susceptible to response bias, social desirability bias, and misclassification. Currently, several methods have been created to combat these issues by modelling the measurement error in diet-disease relationships. Method In this paper, a novel machine learning method is proposed to adjust for measurement error found in misreported data by using a random forest (RF) classifier to label the responses in the FFQ based on the input dataset and creating an algorithm that adjusts the measurement error. We demonstrate this method by addressing underreporting in selected FFQ responses. Result According to the results, we have high model accuracies ranging from 78% to 92% in participant-collected data and 88% in simulated data. Conclusion This shows that our proposed method of using an RF classifier and an error adjustment algorithm is efficient in correcting most of the underreported entries in the FFQ dataset and could be used independently of diet-disease models. This could help nutrition researchers and other experts to use dietary data estimated by FFQs with less measurement error and create models from the data with minimal noise. Supplementary Information The online version contains supplementary material available at 10.1186/s12911-023-02262-9. Introduction Food frequency questionnaires (FFQ) are often used in large prospective cohort studies to assess habitual dietary intake and understand diet-disease relationships [1]. These questionnaires are faster to administer and require fewer resources to analyze in a large cohort compared to multiple 24-h dietary recalls (24HR) or multiday dietary food records (FR). Dietary assessment that utilizes 24HR may reduce measurement error; however, archetypal cohorts and some more recent studies use FFQs to measure dietary patterns. Cohorts such as Reasons for Geographic and Racial Differences in Stroke (REGARDS) and the Atherosclerosis Risk in Communities Study (ARIC) contain older versions of FFQs.
Currently, several methods have been used to adjust for measurement error.One of the most common methods is regression calibration, in which the conditional expectation of the true long-term intake of the variable replaces the FFQ intake given a vector of error-free covariates [7][8][9].This supports the assumption that there is underlying truth in the dataset.However, this method has some limitations.It relies heavily on the use of other tools such as 24HR which could introduce additional bias into the model.The solution is to find the most efficient way to use the FFQ dataset without relying on internal calibration. Participants, particularly those with a health condition, sometimes underreport or overreport certain types of food for a variety of reasons [10][11][12][13].We apply the assumption of underlying truth in each dataset, which could be determined from both the healthier participants and the known reasons for under or overreporting [10][11][12][13][14][15][16].For demonstration purposes, we obtained a dataset of university employees that were considered relatively healthy, with either no disease or well controlled disease.Each participant was asked to complete an FFQ at every study visit for the duration of the multi-year study.We used this dataset to build a predictive model to correct over and underreported responses in a full semi-quantitative food frequency questionnaire. Our objective was to reclassify misreported foods to adjust for known measurement error.We proposed a supervised machine learning approach which uses a random forest classifier to label the responses in the FFQ based on the input dataset.In addition, an algorithm was written based on the newly predicted class probabilities derived from the random forest model. FFQ data participants This work is based on information from the Emory Predictive Health institute and Center for Health Discovery and Well Being Database (CHDWB) which has been described previously [17].Briefly, the CHDWB cohort at Emory University in Atlanta, Georgia, USA, was an observational study designed to investigate the effects of clinical self-knowledge and health partner counseling on various health outcomes.In the present study, we included 819 participants for which complete FFQ data at various time points was available.Individuals with poorly controlled chronic disease or acute illness were excluded.Demographic information and potential covariates (e.g., body mass index and personal health history) were collected from the CHDWB cohort database.The FFQ was the Block 2005 [18] delivered in an electronic format.This questionnaire was filled out by the participant prior to study visits via an online portal.These were not verified by the study staff prior to summary calculations conducted by the developer (Nutritionquest, Berkeley, CA, USA).It is assumed that some entries are either underreported or overreported. Blood draws were performed in a fasting state and blood lipids and blood glucose were measured by commercially available assays (Quest Diagnostics, Madison, NJ, USA).Body fat percentage was determined using dual x-ray absorptiometry (Lunar iDXA, General Electric, Chicago, IL, USA).Weight was measured in athletic clothing without shoes on a research grade scale (Tanita, Tokyo, Japan) and height was measured using a standard stadiometer.BMI was calculated using kg of body weight divided by height in meters squared. 
Exploratory data analysis Initial data analysis included missing data assessment and correlation analysis.The CHDWB dataset contained demographics, clinical biomarkers, and FFQ data reflecting habitual diet in the past year.The original dataset contained 593 variables and 3193 unique samples, including missing data points.Heatmaps were used to visualize the correlations between food frequency and demographic information.Due to high correlations between variables, it was fair to assume a low rank data assumption, which allowed the underlying ground truth to be determined from the present data to infer accuracy. Variable selection Underestimation errors in the FFQ are the most common issues [19]; thus, we chose to focus our analyses on this problem.Variables were selected based on fat content as those foods are typically underreported [20].The four selected variables used as individual responses are the frequency and quantity of bacon consumed and the frequency and quantity of fried chicken consumed.The frequency count of the values of these variables can be seen in Fig. 1 These selected variables are ordinal.As mentioned above, these were used as responses in four different classification models where accurate responses were predicted. We chose the following as explanatory variables: blood levels of low-density lipoprotein (LDL), total cholesterol, and glucose, body fat percentage, and body mass index (BMI) [1].The explanatory variables selected for responses were chosen based on the assumption that they would have low measurement error because of their objective nature.These explanatory variables have proven relationships with frequency and quantity of bacon and fried chicken [14][15][16].Age and sex, which are generally reported accurately, were added as demographic explanatory variables. Training machine learning-based error adjustment model The proposed error mitigation approach relies on the premise that some groups of participants may be more likely to report their food consumption more accurately, while others tend to underreport/overreport their unhealthy/healthy food consumption.Another assumption made in this study is that some of the objectively measured variables including LDL cholesterol, total cholesterol, blood glucose, body fat percentage and anthropometric measures, and participant characteristics, including age and sex, are correlated with food consumption habits.For example, participants that have a high saturated fat diet may have high blood cholesterol concentrations [14][15][16]. The overview of the proposed framework is given in Fig. 
3. We first split the dataset into two groups representing healthy and unhealthy participants. The healthy group data were defined by using certain cutoffs for body fat percentage, age and sex, which classified participants by their health risks (for the specific health risk classification table, please refer to Tables A2.1 and A2.2). While the participants with excellent, good and normal health risks have their responses defined as the healthy samples of the data, consisting of 384 responses and 9 variables, the rest are defined as the unhealthy group data, consisting of 2238 responses and 9 variables. Then, based on the foregoing assumptions, we used the healthy group data to train a predictive model that quantifies the relationship of the lab test variables and participant characteristics with the food frequency variables. Specifically, since the FFQ data are categorical, we use random forest (RF) classification to build the predictive model. Using cross-validation, we tuned the hyperparameters and selected the tree depth that showed the best model performance and highest training accuracy [21]. RF was selected over logistic regression due to higher performance, higher capability of capturing nonlinear relationships, robustness to overfitting, and ability to rank the importance of predictors [21]. After this relationship was learned, the trained predictive model was used to predict the food frequency variables for the unhealthy group based on their lab test results, BMI, sex, and age. Finally, the predicted value was compared to the original value reported by the participants in the FFQ dataset in the unhealthy group data, where the likelihood of underreporting is higher. If the original FFQ response is smaller by any amount than the predicted value, it will be replaced by its prediction. Otherwise, it is kept unchanged or modified according to the procedure described in the "Applying error adjustment model" section. Applying error adjustment model In the final step of the proposed error mitigation approach, the trained RF prediction model used the objectively measured variables, anthropometric variables and participant characteristics to determine the FFQ response category with the highest likelihood. Additionally, the prediction model can provide the likelihood of other categories for each response. For a response there are L categories C(1), C(2), ..., C(L), sorted in descending order with respect to their corresponding probabilities P(1), P(2), ..., P(L) obtained by the RF model. First, the class with the highest probability, i.e., C(1), is compared with the reported response C_R in the FFQ dataset. For healthy food, where the likelihood of overreporting is higher, the FFQ response is replaced with the category lower than the reported FFQ response that has the largest probability, i.e., C(i) where i = argmax{P(i); i = 1, 2, ..., L; C(i) < C_R}. For unhealthy food, where the likelihood of underreporting is higher, the FFQ response is replaced with the category higher than the reported FFQ response that has the largest probability, i.e., C(i) where i = argmax{P(i); i = 1, 2, ..., L; C(i) > C_R}. A summary of this procedure is given in Fig. 2. Validation studies using simulation The purpose of using a simulation study was to evaluate the performance of our proposed method. Unlike the FFQ dataset, the ground truth is known in simulated data. The main goal was to analyze how the proposed model would perform when the simulated data are very similar to real data, in this case FFQ data.
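A minimal sketch of the category-selection rule described in the "Applying error adjustment model" subsection above, written for the underreporting (unhealthy food) case. It assumes a fitted scikit-learn RandomForestClassifier, and the function and variable names are ours, not the authors':

```python
import numpy as np

def adjust_underreported(model, X, reported):
    """Move each suspected underreport up to the most probable category
    strictly above the reported one (sketch of the rule in the text)."""
    proba = model.predict_proba(X)          # shape (n_samples, n_categories)
    classes = np.asarray(model.classes_)    # ordinal category labels, ascending
    top = classes[np.argmax(proba, axis=1)] # most likely category per participant
    adjusted = np.asarray(reported).copy()
    for i, c_rep in enumerate(adjusted):
        if top[i] > c_rep:                  # model favours a higher category
            cand = np.where(classes > c_rep)[0]
            adjusted[i] = classes[cand[np.argmax(proba[i, cand])]]
        # otherwise the reported category is kept unchanged
    return adjusted
```

The overreporting rule described for healthy foods would mirror this, with candidates taken from the categories below the reported one.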
To simulate the dataset, we randomly generated a synthetic multinomial dataset using the make_classification function from the Scikit-learn library. For simplicity, the synthetic dataset was meticulously engineered to emulate the characteristics observed in the FFQ data. For example, since the case study has 8 variables and 7 classes, the synthetic dataset was constructed to maintain those parameters, incorporating the 7 classes within its responses and 8 distinct variables across observations. In this study, we assumed the response represents consumption of unhealthy food (e.g., bacon frequency level). We followed this procedure to generate 1000 responses for 1000 simulated participants. To ensure that our method is robust, we also tried two other simulation settings. The second setting involved generating another synthetic multinomial dataset with a smaller number of categories. We chose 4 classes, which is similar to the bacon and fried chicken quantity levels. In the third setting, we generated a synthetic multinomial dataset with more distinct variables across observations and more responses, i.e., 15 variables and 10,000 responses. The datasets were split into healthy and unhealthy subsets using a train-test split with a test ratio of 0.3 to mimic the process in the original food frequency dataset. This means 70% of each synthetic dataset was classified as the healthy subset and the rest was classified as the unhealthy subset. To induce underreported responses, responses from the unhealthy subset were randomly altered to lower categories such that 50% of responses decreased by one level, 20% decreased by two levels, 10% decreased by three levels, and the rest remained the same. Next, following our proposed approach, we trained the error adjustment model using the healthy group data and used the trained model to adjust the responses for the unhealthy subset. Figure 3 depicts a summary of all the methods used. The final corrected entries of bacon frequency, bacon quantity, fried chicken frequency and fried chicken quantity are shown in Figs. 4, 5, 6 and 7, respectively. Looking at the frequency of bacon consumed, the RF classifier model has a model accuracy of 84.4%. This model demonstrates a precision score of 0.801, a recall score of 0.805, and an F1 score of 0.807. In the confusion matrix, we see that many of the entries stay the same; however, a couple of entries moved to higher classes. In Fig. 4, about 50% of 'class 1' entries became 'class 2', and 3% of 'class 2' entries became 'class 4'. Looking at the quantity of bacon consumed, the RF classifier model has a model accuracy of 87%. This model demonstrates a precision score of 0.826, a recall score of 0.818, and an F1 score of 0.820. In the confusion matrix (Fig. 5), we see that many of the entries stay the same, with some changes detected in classes above the initial class. For instance, about 38% of 'class 1' entries became 'class 2' and 16% of 'class 2' entries became 'class 4'.
Fig. 4 Confusion matrix showing the changes between the original and adjusted responses for bacon frequency
Fig. 5 Confusion matrix showing the changes between the original and adjusted responses for bacon quantity
For the frequency of fried chicken consumed, the RF classifier model has a model accuracy of 91.6%. This model demonstrates a precision score of 0.882, a recall score of 0.861, and an F1 score of 0.858. In the confusion matrix, we see that many of the entries stay the same; however, a couple of entries moved to higher classes.
In Fig. 6, 54% of 'class 1' entries became 'class 2', and 1.4% of 'class 2' entries became 'class 4'. Looking at the quantity of fried chicken consumed, the RF classifier model has a model accuracy of 93.1%. This model demonstrates a precision score of 0.912, a recall score of 0.902, and an F1 score of 0.896. In the confusion matrix (Fig. 7), we see that many of the entries stay the same, with some changes detected in classes above the initial class. For instance, about 91% of 'class 1' entries became 'class 2' and 4% of 'class 2' entries became 'class 3'. Simulation results From the simulation study, the RF classifier model has a model accuracy of 78.5%. This model demonstrates a precision score of 0.794, a recall score of 0.786, and an F1 score of 0.785. After applying the error adjustment algorithm, we saw that some of the entries in 'class 1' became 'class 2' entries. To ensure the proposed method worked using this simulated study, we compared the originally simulated data responses to the final simulated responses using another confusion matrix (Fig. 8). From this, we see that the underestimation algorithm accurately classified the classes with an 82.06% average accuracy rate. To ensure that our model is robust, we also evaluated the second setting, which resulted in a model accuracy of 90%. This model demonstrates a precision score of 0.901, a recall score of 0.900, and an F1 score of 0.90. After applying the error adjustment algorithm, we saw that some of the entries in 'class 1' became 'class 2' entries. Finally, we compared the originally simulated data responses to the final simulated responses using another confusion matrix (Fig. 9). From this, we see that the underestimation algorithm accurately classified the classes with an 84.71% average accuracy rate. Furthermore, the third setting resulted in a model accuracy of 80.71%. This model demonstrates a precision score of 0.807, a recall score of 0.806, and an F1 score of 0.805. After applying the error adjustment algorithm, we saw a similar shift pattern in the entries. Finally, we compared the originally simulated data responses to the final simulated responses using another confusion matrix (Fig. 10). From this, we see that the underestimation algorithm accurately classified the classes with an 82.47% average accuracy rate. These results indicate that our proposed method worked as expected, since the true original entries were known, and show that it is robust.
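In outline, the first simulation setting can be reproduced as follows. This is our paraphrase of the protocol described above (make_classification, a 70/30 healthy/unhealthy split, and induced one-, two- and three-level drops), reusing the adjustment helper sketched earlier; it is not the authors' code and the hyperparameters are placeholders:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Setting 1: 1000 simulated participants, 8 variables, 7 ordinal categories.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=6,
                           n_redundant=2, n_classes=7, n_clusters_per_class=1,
                           random_state=0)

# 70% "healthy" (assumed accurate) vs 30% "unhealthy" (prone to underreporting).
X_h, X_u, y_h, y_u = train_test_split(X, y, test_size=0.3, random_state=0)

# Induce underreporting: 50% of unhealthy responses drop one level,
# 20% drop two, 10% drop three, and the remainder stay unchanged.
n = len(y_u)
order = rng.permutation(n)
drops = np.zeros(n, dtype=int)
drops[order[:int(0.5 * n)]] = 1
drops[order[int(0.5 * n):int(0.7 * n)]] = 2
drops[order[int(0.7 * n):int(0.8 * n)]] = 3
reported = np.maximum(y_u - drops, 0)

# Train on the healthy subset, then adjust the misreported responses.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_h, y_h)
adjusted = adjust_underreported(rf, X_u, reported)  # helper sketched above
print("share of responses recovered to the true category:",
      np.mean(adjusted == y_u))
```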
Discussion As seen from the results, we have high model accuracies ranging from 77.5% to 91.6% in participant collected data.This shows that our two-step method of using RF classifier and an error adjustment algorithm is efficient in correcting most of the underreported entries in the FFQ dataset.Looking at the confusion matrices of bacon and fried chicken frequency, we can see that misclassification due to underestimation is greatly reduced as the self-reported classes are moved to their "true" classes, with accuracies of 83.1% and 91.6%, respectively.The same can be seen in the bacon and fried chicken quantity variables as the misclassified observations are adjusted and moved to their true classes with accuracies of 77.5% and 90.3%, respectively.In addition to this, the simulated study shows an accuracy of 78.5%, signifying that the proposed method performs exceptionally.To our knowledge, this is the first application of supervised machine learning methods to used to correct misclassification in FFQ data that does not require calibration data to be collected. Machine learning (ML) methods have been used to optimize prediction of FFQ data with methods such as dimensionality reduction; however, it has not been utilized previously in the correction of measurement error [22].Hence, in this paper, we explore the use of ML to adjust measurement error.Several machine learning models such as decision trees and multinomial logistic regression were considered for use as the classification model in this analysis.However, accuracy and model simplicity were chosen to be the most important characteristics for a good model; therefore, random forest proved to be the best performer.We have reflected this on our simulated dataset in the table below (Table 1).Random forest works as an aggregate of multiple random decision trees, which gives an accuracy advantage over other methods [21].In addition, it is possible to rank the most important variables influencing the responses [21]. Previous studies have shown the use of other methods such as regression calibration and generalized gamma regression to adjust for measurement error [8].These methods use a generalized linear model to show dietdisease association, and directly model bias in them.Knowing that true intake is generally measured incorrectly or is missing, these methods express the newly corrected data points as the conditional expected value of the unobserved true intake, given the observed data with error and error-free covariates [8].These new data points are then used as the observations for the dietdisease model and replace the previous observations.However, a lot of parameters, steps and instruments are involved in this process, hence contributing to the additional noise in the model.One of the instruments used, a 24HR recall has a distribution that is characterized by skewness due to excess zeros in the dataset.It is also characterized by heteroskedasticity, meaning higher variability than the FFQ dataset [23].The regression calibration method also involves a Box-cox transformation to normalize the 24HR recall data, and an inverse transformation to bring it back to its original scale.This means that between-person correlations would be lost.Generalized gamma regression combats this as the true intake is modeled as the product of the conditional mean and mean probability of the gamma distribution of the individual variables [8]. 
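The model-selection comparison summarised above (random forest versus a decision tree and multinomial logistic regression, as in Table 1) can be run in a few lines of scikit-learn. The snippet below is our illustration, reusing X_h and y_h from the simulation sketch above as stand-ins for the healthy-subset covariates and FFQ responses:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

candidates = {
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "multinomial logistic regression": LogisticRegression(max_iter=2000),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X_h, y_h, cv=5)   # 5-fold cross-validation
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```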
Our proposed method has many advantages over regression calibration methods. It does not require a 24HR recall as an additional instrument. It relies on the derived correlations and the underlying ground truth in the FFQ dataset, hence ensuring no unnecessary introduction of variability into the data and no extra participant burden. This works because of the low-rank assumption. It also does not involve a transformation of the variables or the introduction of any other distribution, because the subset of data containing the underlying truth in the FFQ dataset is assumed to be normal. Our method treats the measurement error in the FFQ data as an aggregate of the measurement errors in multiple covariates and adjusts the error in all of them concurrently. Another significant difference is that current methods are fully parametric and hence inefficient; our proposed method involves fewer parameters and is more computationally efficient. Finally, we see high accuracy measures for the models used, demonstrating the efficiency of our proposed method.

There are some limitations to be considered. Previous research uses energy intake calibration with known biomarkers, such as doubly labeled water or urinalysis, to determine true energy intake. However, FFQs are not designed to quantitatively estimate total energy intake, due to the finite list of foods and beverages and limited data on food specificity. In addition, although we have successfully derived a method to tackle incorrect observations caused by underestimation, future research should address the other FFQ measurement challenges, namely overreported observations and missing data points. Knowing that food frequency questionnaires query a finite set of foods and beverages [19], it is fair to assume that certain foods will be omitted. This increases the issue of under-reporting; however, there are instances where over-reporting happens (e.g., vegetable consumption). These analyses will be done in further studies.

Conclusion and future work
This research presents an alternative and novel method to reduce the measurement error in FFQ datasets using the RF classifier model and an additional underreported-data adjustment algorithm to recover the "true" predicted classes. This method efficiently reduces misclassification due to underestimation in self-reported dietary data estimated by FFQ.

In future work, ML techniques to adjust for missing entries in the dataset and for overreporting will be explored further, as these also contribute to the challenges faced by researchers using FFQ data. We will also consider the use of deep learning methods to address the missing data challenges and mitigate measurement error in the datasets. Machine learning has proven to be an invaluable tool for error adjustment and could be useful for addressing numerous measurement error problems.

Fig. 1 Bar plots showing bacon and fried chicken frequency (F) and quantity (Q) counts. The x-axis represents the consumption frequency (F) per year and quantity (Q) in cups, and the y-axis represents the number of participants (count) in each category.
A further figure depicts a summary of all the methods used; the final corrected entries of bacon frequency, bacon quantity, fried chicken frequency and fried chicken quantity are shown in Figs. 4, 5, 6 and 7, respectively.
Fig. 7 Confusion matrix showing the changes between the original and adjusted responses for fried chicken frequency.
Fig. 8 Confusion matrix showing the changes between the original simulated data responses and the adjusted responses.
Fig. 9 Confusion matrix showing the changes between the original simulated data responses for the second setting and the adjusted responses.
Fig. 10 Confusion matrix showing the changes between the original simulated data responses for the third setting and the adjusted responses.
... such as Reasons for Geographic and Racial Differences in Stroke (REGARDS) and Atherosclerosis Risk in Communities Study (ARIC), contain older versions of FFQs.
2023-09-09T13:33:20.258Z
2023-09-09T00:00:00.000
{ "year": 2023, "sha1": "4546bc1f5ad04dcd74228ceee38b1611f4196d26", "oa_license": "CCBY", "oa_url": "https://bmcmedinformdecismak.biomedcentral.com/counter/pdf/10.1186/s12911-023-02262-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7e65f87b92b6482e87a507836191fed978e6f236", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
9367780
pes2o/s2orc
v3-fos-license
Aneurysmal Rebleeding : Factors Associated with Clinical Outcome in the Rebleeding Patients Objective : Aneurysmal rebleeding is a major cause of death and disability. The aim of this study is to investigate the incidence of rebleeding, and the factors related with patient’s outcome. Methods : During a period of 12 years, from September 1995 to August 2007, 492 consecutive patients with aneurysmal subarachnoid hemorrhage (SAH) underwent surgery at our institution. We reviewed the patient’s clinical records, radiologic findings, and possible factors inducing rebleeding. Also, we statistically analyzed various factors between favorable outcome group (FG) and unfavorable outcome group (UG) in the rebleeding patients. Results : Rebleeding occurred in 38 (7.7%) of 492 patients. Male gender, location of aneurysm (anterior communicating artery) were statistically significant between rebleeding group and non-rebleeding group ( p = 0.01 and p = 0.04, respectively). Rebleeding occurred in 26 patients (74.3%) within 2 hours from initial attack. There were no statistically significant factors between FG and UG. However, time interval between initial SAH to rebleeding was shorter in the UG compared to FG (FG = 28.71 hrs, UG = 2.9 hrs). Conclusion : Rebleeding occurs more frequently in the earlier period after initial SAH. Thus, careful management in the earlier period after SAH and early obliteration of aneurysm will be necessary. INTRODUCTION Aneurysmal rebleeding is a major cause of death and disability in patients with subarachnoid hemorrhage (SAH). However, few risk factors have been identified for rebleeding, and there are many controversies over these risk factors. Some authors reported that rebleedings occured more often in patients with large aneurysm, however others could not confirm this 2,12,16,19) . Also, many studies reported that there was highest risk of rebleeding within the first 24 hours, whereas other studies demonstrated after 3 days or no association with time period 1,17,20,22) . The aim of this study is to describe the incidence of rebleeding and to investigate the factors related with clinical outcome in the rebleeding patients. MATERIALS AND METHODS This study was approved by the Institutional Review Board. During a period of 12 years, from September 1995 to August 2007, 492 consecutive patients with aneurysmal SAH underwent neurosurgical clipping in our institution. We reviewed the patients' clinical records, radiologic findings, and various factors associated with rebleeding. The inclusion criteria were as follows : 1) rebleeding from ruptured aneurysm; 2) after initial SAH, rebleeding was confirmed by repetitive computed tomographic (CT) scan; 3) the patients underwent surgical clipping. The investigation had two arms. The first arm included the total number of patients with aneurysmal SAH who underwent surgery within 48 hours after initial event. We divided this population into two groups; rebleeding group (RG) and non-rebleeding group (NG). We compared demographic data (sex, age, and history of diabetes and hypertension), radiologic findings (size, location of aneurysm, and Fisher's grade) and clinical factors [Hunt Hess (H-H) grade, Glasgow Outcome Scale (GOS)] in each group. The second arm only assessed rebleeding patients. We divided this population into two groups; favorable outcome group (FG; GOS 4,5) and unfavorable outcome group (UG; GOS 1, 2, 3) after six months from initial SAH. We compared demographic, radiologic, and clinical factors between these two groups. 
In addition, rebleeding-related factors were compared, including the time interval from initial SAH to rebleeding, the patient's circumstances at the time of rebleeding, the type of rebleeding, initial systolic blood pressure, systolic blood pressure after rebleeding, change in systolic blood pressure, and initial blood glucose. Comparisons between the two groups were analyzed using the independent Student's t-test and the chi-square test. All data are presented as mean ± standard deviation. A p value less than 0.05 was considered statistically significant.

Part I
During the study period, 492 patients fulfilled the inclusion criteria. The mean age of the study population was 53.5 ± 12.0 years; 182 were men (37%) and 310 were women (63%). Rebleeding occurred in 38 patients (7.7%). Initial H-H grade, Fisher's grade, male gender, and location of aneurysm (anterior communicating artery) were significantly different (Table 1).

Part II
A total of 35 patients fulfilled the inclusion criteria. Three patients were excluded because rebleeding-related factors could not be identified. The mean age was 52.5 ± 11.3 years; 18 were men (51.4%) and 17 were women (48.6%). Among these patients, 14 (40%) showed a favorable outcome and 21 (60%) showed an unfavorable outcome at 6 months after the initial SAH.

DISCUSSION
In this study, the incidence of aneurysmal rebleeding was 7.7%. Although some previous studies 3,6,21) have reported an incidence of aneurysmal rebleeding as high as 17.3%, other reports 2,16,23) describe lower rates, ranging from 4.0% to 9.7%. This discrepancy could be explained by the development of endovascular and surgical therapies, and by advances in intensive care management. Some investigators have reported on the timing of rebleeding after the initial SAH. In a series of 273 patients, Ohkuma et al. 17) reported that the peak time of rebleeding was within 2 hours (77%) after the initial SAH. In 2007, Tanno et al. 23) reported that rebleeding occurred within 6 hours in 88 out of 181 patients (48.6%). Consistent with these reports, we found that most rebleeding occurred within 2 hours (74.3%) after ictus in our series. Accordingly, these findings suggest that aneurysmal rebleeding occurs more frequently in the earlier period after the initial SAH than indicated by previous reports 14,22) . In the analysis of factors associated with aneurysmal rebleeding by Naidech et al. 16) , H-H grade on admission and maximal aneurysm diameter were independent predictors of rebleeding. On the other hand, Ohkuma et al. 17) reported significant differences in H-H grade, rates of intracerebral hematoma, intraventricular hematoma and subdural hematoma on CT scan at the time of admission, operability, and prognosis between the RG and NG. Also, systolic arterial pressure ≥ 160 mmHg was a possible risk factor for rebleeding. In 2006, Pleizier et al. 13) suggested that the hazard ratio of rebleeding in large versus small aneurysms was 1.6 [95% confidence interval (CI) 1.0-2.6]. In this study, we found that initial H-H grade, Fisher's grade, and GOS were significant rebleeding factors. Another interesting finding of this study was that male gender and location of aneurysm (anterior communicating artery) were significantly different between the RG and NG. Previous reports suggested that in males, ruptured aneurysms are more common in the anterior cerebral artery 7,18) . Although there was no significant difference, a recent retrospective study showed that the aneurysm was located in the anterior communicating artery in 30.4% and in the internal carotid artery in 26.0% of the rebleeding patients 23) .
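As a purely illustrative companion to the statistical analysis described in the Methods (independent Student's t-test for continuous variables, chi-square test for categorical ones), the sketch below shows how such two-group comparisons can be run in Python with SciPy; all numbers are placeholders, not patient data.

import numpy as np
from scipy import stats

# Placeholder values: time from initial SAH to rebleeding (hours) in the
# favorable (FG) and unfavorable (UG) outcome groups.
fg_interval = np.array([30.0, 26.5, 29.7, 28.2, 27.9])
ug_interval = np.array([2.5, 3.1, 2.8, 3.4, 2.7])
t_stat, p_ttest = stats.ttest_ind(fg_interval, ug_interval)   # independent Student's t-test

# Placeholder 2x2 contingency table, e.g. male/female counts in FG vs. UG.
table = np.array([[8, 6],
                  [10, 11]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

print(f"t-test p = {p_ttest:.3f}; chi-square p = {p_chi2:.3f} (significant if < 0.05)")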
Hemodynamic factors are more important in aneurysm formation in male gender due to size of the anterior cerebral artery and its complexities 8,9,13) . The relationship between aneurysmal rebleeding and hemodynamic factor can explain why rebleeding in male gender and anterior communicating artery is more common. Also, it seems reasonable to speculate that gender-related predisposition to rebleeding might reflect prevalent location of aneurysm and anatomical relationship between aneurysm and surrounding structures. Further investigation for epidemiology of the rebleeding patients will be necessary. Multiple clinical factors were evaluated for their clinical significance to predict the outcome in our series. Although there were no significant clinical factors, the time interval between initial SAH to rebleeding was longer in FG. This suggests that careful management and early intervention for obliteration of aneurysm could yield to favorable outcome in the rebleeding patients. Some previous reports have mentioned the prevalent circumstances of rebleeding. In large series of 559 patients, Sasaki et al. 21) reported that 19.9% of rebleeding from the ruptured aneurysm had occurred during transfer. In 2006, Ohkuma et al. 17) reported that 13.6% of patients suffered from rebleeding in the ambulance or at the referring hospital before admission. We found that out-hospitalization rebleeding rate was 37.1%. Our results were somewhat higher than the studies above. This discrepancy may be due to the difference of emergency medical delivery system. Consistent with previous reports [3][4][5][6]10,11,15) , we experienced rebleeding during angiogram (4 cases) and anesthesia (5 cases). Thus, more careful attention will be necessary while performing these procedures. Limitations of this study need to be mentioned. First, we only addressed the patients who underwent neurosurgical clipping. From January 2004 to December 2007, we investigated 261 patients with spontaneous SAH whom were admitted to our institution. Definitive treatment (clipping) of SAH was provided in 158 patients and 103 were untreated. Rebleeding was identified in 19 patients (7.3%), 7 in the clipping group and 12 in the untreated group (Fig. 3). However, we could not completely identify rebleeding, because 35 among untreated aneurysmal SAH patients were suddenly expired or transferred to other hospitals. Also, we could not perform follow up brain CT scan to confirm the rebleeding in all SAH patients due to financial difficulties, refusal from relatives, and moribund status. Therefore, the true incidence of aneurysmal rebleeding was somewhat higher than that of this study and further prospective investigation will be needed. Another limitation was the retrospective design of this study. CONCLUSION Rebleeding occurs more frequently in the earlier period after initial SAH. We should consider the possibility of rebleeding during early angiogram and anesthesia. Early obliteration of aneurysm would be mandatory.
2018-04-03T00:00:36.906Z
2010-02-01T00:00:00.000
{ "year": 2010, "sha1": "d4a8011dfc92af28cdda1cd2e45645c0823113ae", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3340/jkns.2010.47.2.119", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "c09a161c0c150e12b163f441deed96caac8f766d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119292252
pes2o/s2orc
v3-fos-license
Tracing the accretion history of supermassive Black Holes through X-ray variability: results from the Chandra Deep Field-South We study the X-ray variability properties of distant AGNs in the Chandra Deep Field-South region over 17 years, up to $z\sim 4$, and compare them with those predicted by models based on local samples. We use the results of Monte Carlo simulations to account for the biases introduced by the discontinuous sampling and the low-count regime. We confirm that variability is an ubiquitous property of AGNs, with no clear dependence on the density of the environment. The variability properties of high-z AGNs, over different temporal timescales, are most consistent with a Power Spectral Density (PSD) described by a broken (or bending) power-law, similar to nearby AGNs. We confirm the presence of an anti-correlation between luminosity and variability, resulting from the dependence of variability on BH mass and accretion rate. We explore different models, finding that our acceptable solutions predict that BH mass influences the value of the PSD break frequency, while the Eddington ratio $\lambda_{Edd}$ affects the PSD break frequency and, possibly, the PSD amplitude as well. We derive the evolution of the average $\lambda_{Edd}$ as a function of redshift, finding results in agreement with measurements based on different estimators. The large statistical uncertainties make our results consistent with a constant Eddington ratio, although one of our models suggest a possible increase of $\lambda_{Edd}$ with lookback time up to $z\sim 2-3$. We conclude that variability is a viable mean to trace the accretion history of supermassive BHs, whose usefulness will increase with future, wide-field/large effective area X-ray missions. INTRODUCTION Flux variability is a defining characteristic of Active Galactic Nuclei (AGNs), reflecting the small spatial region in which the observed emission is produced and the production mechanism itself (Fabian 1979;Rees 1984). AGNs are observed to vary on all timescales, and across the whole electromagnetic spectrum, although the maximum power and fastest variations are found at the highest energies (X-rays and γrays), due to the fact that such radiation is mainly generated close to the central engine and over small spatial regions (Ulrich et al. 1997). In the X-ray band in particular, extended and detailed observations of nearby sources, have revealed that the variability is characterised by 'red noise' behaviour, with more power existing on the longest timescales, in close resemblance to what is observed in binary accreting systems containing smaller, stellar-mass Black Holes (BH). The origin of the X-ray variability itself is not well understood, and both internal (instabilities of the accretion flow, flaring corona, orbiting hotspots) and external (variable obscuration, micro lensing) phenomena have been proposed to explain the flux variations. Early investigations of AGN variability did not show any distinct features in their Power Spectral Density (PSD), suggesting that the PSD has a pure power-law shape (Green et al. 1993;Lawrence & Papadakis 1993) which unfortunately, has little power to discriminate among variability models. However several factors (analogies with galactic binaries, dependence of the emission on the physics of the accretion process, unphysical behaviour when extrapolated to long timescales) indicated that some characteristic timescale should be observable in the PSD. 
More recently, the combination of long observing campaigns over several decades, and shorter high-quality XMM-Newton observations, has allowed the discovery of at least one, and in some cases two, breaks in the PSD of nearby AGNs (Uttley et al. 2002;Papadakis et al. 2002;Markowitz et al. 2003;McHardy et al. 2007). Such features seem linked to both the BH mass and the properties of the accretion flow, and have enabled using variability to test the properties of the accretion flows as well as to measure the main physical parameters (BH mass, accretion rate) of the AGN. Similarly, we have been able to see extreme cases of variability induced by varying column densities of obscuring material, in type 1 (Yang et al. 2016), type 2 AGNs (e.g. Risaliti et al. 2011;Giustini et al. 2011;Risaliti et al. 2009) and in BAL quasars (e.g. Lundgren et al. 2007;Gibson et al. 2008Gibson et al. , 2010. Most of our knowledge about AGN variability in the X-ray band is derived from extensive observations of nearby and mostly low-luminosity AGNs, as these were the only ones initially accessible by low-effective area and/or low spatial resolution instruments. Such facilities have been the only ones allowing the long and regular monitoring campaigns required to avoid the problems introduced by low statistics and irregular sampling in the temporal analysis. The extension of such studies to a larger population of distant sources requires both a large effective area and a good angular resolution to avoid crowding effects. Progress was made adopting a less sophisticated approach, measuring the integrated power over long timescales in an attempt to investigate the variability properties of AGNs over cosmological volumes. For in-stance Almaini et al. (2000); Manners, Almaini & Lawrence (2002) studied samples of QSOs selected from ROSAT surveys up to z 4; Paolillo et al. (2004) analysed the variability properties of AGNs in the Chandra Deep Field-South (CDF-S) using Chandra data, Papadakis et al. (2008) and Allevato et al. (2010) used XMM-Newton observations to study the variability of AGNs in the Lockman Hole and the CDF-S respectively, Vagnetti et al. (2011) and Middei et al. (2017) investigated serendipitous XMM-Newton,Swift and ROSAT samples, while Shemmer et al. (2014Shemmer et al. ( , 2017 explored a group of luminous quasars combining ROSAT, Chandra and Swift observations. These works have shown that variability is ubiquitous in AGNs and that it has similar properties to those of nearby and less luminous AGNs, but also suggested that its amplitude may increase with lookback time and is possibly a tracer of the higher average accretion rates present in the earlier Universe. The results so far are not conclusive and suffer from biases due to sparse sampling and low statistics, as well as randomness intrinsic to red-noise processes. For instance, Gibson & Brandt (2012), studying a serendipitous sample of SDSS spectroscopic quasars, confirmed several results of previous works but failed to detect any clear evidence of increased variability at large redshifts. Similar conclusions were reached by other authors: Mateos et al. (2007) and Lanzuisi et al. (2014) studying the XMM-Newton light curves of AGNs in the Lockman Hole and COSMOS fields, and by Vagnetti et al. (2016) using the MEXSAS serendipitous sample. The CDF-S represents the deepest observation of the Universe in X-rays. As discussed above, the first 1 Ms data were used by Paolillo et al. 
(2004) to investigate the nature of variability in distant AGNs, but many of their results were only marginally significant due to the low number of sources, the limited availability of spectroscopic redshifts and the limited timescale coverage. This dataset has grown over time to span, with the 7 Ms data presented in Luo et al. (2016), a time interval of ∼ 17 years, reaching a depth of 1.9 × 10^−17, 6.4 × 10^−18 and 2.7 × 10^−17 erg cm^−2 s^−1 for the 0.5−7, 0.5−2 and 2−7 keV bands respectively, and has accumulated a wealth of ancillary multi-wavelength data. This work is thus intended to test and extend the previous results, and link them to our knowledge based on nearby samples. We have already exploited these data in part in Shemmer et al. (2014), where we used the 2 Ms CDF-S light curves to compare the bulk variability of radio-quiet AGNs to bright high-redshift quasars, in Young et al. (2012), where we used the 4 Ms data to detect the faint AGN population in normal galaxies by means of variability, and in Yang et al. (2016), to investigate the long-term variability of AGNs. Here we present a more refined analysis of the 7 Ms light curves, probing different temporal timescales in order to understand the connection between variability and AGN physical properties at z > 0.5. The paper is organised as follows: in §2 we discuss the data and the lightcurve extraction process, §3 explains how we detect and characterise the population of variable sources, §4 explains how we measure the average variability and study its dependence on the properties of the AGN population, and §5 discusses how we test different variability models and use them to constrain the AGN accretion history with lookback time. Finally, in §6, we discuss our results and present our main conclusions.

THE DATA
The CDF-S data used here are those described in detail in Luo et al. (2016; also see Luo et al. 2008a; Xue et al. 2011). For completeness we briefly summarize here the main properties of the dataset, referring the reader to the above papers for a thorough discussion of the data properties. The dataset consists of 102 observations collected by Chandra between 1999 and 2016, adding up to a total exposure time of 6.727 Ms; the individual observations have exposure times ranging from ∼ 9 ks up to 141 ks, and have very similar aimpoints within ∼ 1′, although different roll angles (see Table 1 in Luo et al. 2016). The data reduction procedure adopted in order to create event lists and exposure maps is described in detail in Luo et al. (2016) and we refer the reader to that paper for details. In order to study the temporal behaviour of the AGN population, we extract AGN lightcurves following the same procedure adopted by Paolillo et al. (2004) for the 1 Ms dataset. We start from the main source catalog of Luo et al. (2016), consisting of 1008 X-ray sources, which represents our "main sample"; we instead ignore the Supplementary Near-Infrared Bright Catalog in Luo et al. (2016), as these sources are all too faint to be useful in our analysis. For each source we measured counts within a circular aperture with a variable radius R_S depending on the angular distance θ (in arcsec) from the average aimpoint: R_S = 2.4 × FWHM arcsec, where FWHM = Σ_{i=0}^{2} a_i θ^i is the estimated full width at half maximum of the point spread function (PSF) and a_i = {0.678, −0.0405, 0.0535} (see Giacconi et al. 2002, for further details). The only difference with respect to Paolillo et al. (2004) is that the minimum radius was set to 3 arcsec, in order to fully exploit the sharp Chandra PSF in the FOV center and minimise cross contamination between nearby sources 1 .
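A minimal sketch of the aperture-radius rule just described; the polynomial coefficients are taken from the text, the off-axis angle must be supplied in the units assumed by the Giacconi et al. (2002) parametrization, and the 3-arcsec floor mentioned above is applied at the end.

import numpy as np

# Extraction-aperture radius: R_S = max(3", 2.4 * FWHM(theta)), with
# FWHM(theta) = a0 + a1*theta + a2*theta^2 (coefficients quoted in the text).
a = np.array([0.678, -0.0405, 0.0535])

def source_radius(theta):
    # theta: off-axis angle, in the units of the adopted PSF parametrization.
    fwhm = a[0] + a[1] * theta + a[2] * theta ** 2
    return np.maximum(3.0, 2.4 * fwhm)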
Similarly, the local background for each source was measured in a circular annulus of inner (outer) radius R_S + 2 (R_S + 12) arcsec. Neighbouring objects were always removed from the source or background region when they overlapped. This approach is less sophisticated than the one adopted by Luo et al. (2016), who used the ACIS EXTRACT software to model the Chandra PSF and extract fluxes within polygonal regions; however, it has the advantage of being simpler and of avoiding the low S/N wings of the Chandra PSF (also see Vattakunnel et al. 2012). A comparison between our total fluxes and those derived in Luo et al. (2016) shows that on average our extraction procedure recovers 95% of the total source flux. In any case, we stress that we use our aperture photometry only for the variability analysis, while turning to the more accurate Luo et al. (2016) photometry to obtain total fluxes and luminosities. We binned the data into individual observations: this allows the derivation of lightcurves with 102 points over a 17-year interval. Sources near the edge of the detector may be missing in several observations due to the different aim-points and roll angles of each pointing. We follow Paolillo et al. (2004) in retaining only the epochs where > 90% of the source region and > 50% of the background region fall within the FOV. In any case, 884 (88%) of our sources have lightcurves with at least 50 bins and 758 (75%) are sampled by all 102 observations. The lightcurves were extracted both in the full 0.5-8 keV band, and in the 2-8 keV rest-frame band for the 986 sources with available redshifts 2 . Examples of CDF-S lightcurves are shown in Figure 1. A short movie showing the variability of sources in the entire CDF-S field can be found at http://people.na.infn.it/paolillo/MyWebSite/CDFS.html .

FINDING VARIABLE AGNS
To assess the significance of variability of the sources in the main sample, we compute the χ² of each lightcurve, defined as

χ² = Σ_{i=1}^{N_obs} (x_i − x̄)² / σ²_err,i ,

where N_obs is the number of observations in which the source falls inside the FOV, x_i and σ_err,i are the count rate and its error measured in the i-th observation after background subtraction and after correcting for exposure and effective area variations 3 , and x̄ is the average count rate extracted from the stacked 7 Ms data. We then compare the measured χ² with the expected value based on a set of 1000 simulations of each source, assuming a constant flux, as done in Paolillo et al. (2004). The simulations reproduce all the actual data properties, including Poisson noise, background and exposure. This allows us to account for the very large deviation from Gaussianity which affects the low-count regime, and prevents the use of any analytical expression based on such an assumption. We flag as variable the sources with P(< χ²) > 95%, finding that 165 out of 1008 (16%) of the sources are variable. This fraction, however, is affected by the low statistics of the majority of the sources (70 median counts) and, to a lesser extent, by the contamination at low fluxes from normal galaxies with L_X ≲ 10^42 erg s^−1 (where L_X is the rest-frame 0.5-8 keV luminosity calculated as described below). In Figure 2 we plot the cumulative fraction of variable sources, showing that at high count levels all sources are found to vary. The plot confirms the trend observed by Paolillo et al. (2004) in the 1 Ms dataset, and by Young et al. (2012) and Yang et al. (2016) in the 4 and 6 Ms data with lower time resolution, that variability is more easily detected in higher S/N sources, and supports the view that all AGNs are intrinsically variable on a broad range of timescales.
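A minimal sketch of the variability test just described, under simplifying assumptions (pure Poisson counts, a single background value and exposure per epoch); the real analysis also folds in exposure-map and effective-area corrections and the full per-source backgrounds.

import numpy as np

rng = np.random.default_rng(7)

def chi2_lightcurve(src_counts, bkg_counts, exposure):
    # chi^2 against the 'stacked' mean rate, with simple Poisson error propagation.
    rate = (src_counts - bkg_counts) / exposure
    err = np.sqrt(np.maximum(src_counts + bkg_counts, 1.0)) / exposure
    mean = np.sum(src_counts - bkg_counts) / np.sum(exposure)
    return np.sum((rate - mean) ** 2 / err ** 2)

def prob_variable(src_counts, bkg_counts, exposure, n_sim=1000):
    # P(< chi^2): fraction of constant-flux simulations below the observed chi^2;
    # a source would be flagged as variable if this exceeds 0.95.
    chi2_obs = chi2_lightcurve(src_counts, bkg_counts, exposure)
    mean = np.sum(src_counts - bkg_counts) / np.sum(exposure)
    expected = np.maximum(mean * exposure + bkg_counts, 0.0)
    sims = np.array([
        chi2_lightcurve(rng.poisson(expected), bkg_counts, exposure)
        for _ in range(n_sim)
    ])
    return np.mean(sims < chi2_obs)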
We note that Paolillo et al. (2004) also implemented an additional variability estimate (χ²_max), based on the maximum deviation from the mean observed among all the bins in the lightcurve. This approach was useful both because in the 1 Ms dataset we had at most 11 points in each lightcurve, and thus low variability levels could be hard to detect, and also to detect short transient events.

Figure 1. Example of CDF-S lightcurves in the 0.5-8 keV band, for sources with different total counts and temporal behaviour; given the sparse cadence of CDF-S observations, each panel groups nearby observations separated by large temporal gaps; specifically, the first 3 panels represent the 1st Ms data presented by Giacconi et al. (2002) and used in Paolillo et al. (2004), while the 4th, 5th and 6th panels cover the additional 1, 2 and 3 Ms presented by Luo et al. (2008a).

The 7 Ms dataset, however, has much better sampled lightcurves and spans longer timescales, thus probing lower frequencies where the variability power of AGNs is expected to be larger due to the red-noise PSD. Moreover, Luo et al. (2008b) searched the first 2 Ms of CDF-S data, finding no evidence that short-duration events (with durations of a few months, such as stellar tidal disruptions) dominate the observed variability, and only a few fast transients have been observed over the full 7 Ms data (Bauer et al. 2017; Zheng et al., in preparation). In this work we thus concentrate on variability estimates which are averaged over many epochs. The X-ray luminosity vs. redshift distribution of sources from the 7 Ms Luo et al. (2016) sample is presented in Figure 3. The rest-frame 0.5-8 keV luminosity was calculated by Luo et al. (2016) by modelling the X-ray emission with a power law affected by both intrinsic and Galactic absorption; the column density was constrained by finding the value that best reproduced the observed hard-to-soft band ratio, assuming an intrinsic power-law photon index of Γ_int = 1.8 for AGN spectra. Variable sources (solid circles) are detected up to z ∼ 5, but lie preferentially among the brightest sources at any redshift due to the large number of counts required to detect flux changes (see also Figure 4). As was the case in the 1 Ms data, there are several variable sources below the L_X = 10^42 erg s^−1 limit often adopted to separate AGNs from normal galaxies; this is not surprising, since many galaxies are expected to host low-luminosity AGNs whose emission significantly contributes to the overall galaxy X-ray luminosity. In fact, Young et al. (2012) already searched the 4 Ms data to identify LLAGNs, using longer integration timescales (4 × 1 Ms bins) in order to increase the likelihood of detecting variability in faint sources.

MEASURING THE AGN VARIABILITY AMPLITUDE
To quantify the variability power we compute the normalized excess variance, as defined by Nandra et al. (1997) and Turner et al. (1999):

σ²_NXS = (1 / (N x̄²)) Σ_{i=1}^{N} [ (x_i − x̄)² − σ²_err,i ] ,

where x_i and σ_err,i are, again, the (exposure time and effective area corrected) count rate and its error in the i-th bin, x̄ is the average count rate of the source from the stacked 7 Ms data, and N is the number of bins used to estimate σ²_NXS. Almaini et al.
(2000) note that the excess variance, as defined above, is a maximum likelihood (ML) estimator of the intrinsic lightcurve variance only in the case of identical normally distributed errors; if this is not the case the authors point out that there is no exact analytic ML solution that allows estimation of the intrinsic variance thus requiring a numerical approach. However, Allevato et al. (2013) have shown that in practical applications, with realistic lightcurves and sparse sampling, the two approaches yield identical results, as expected by the fact that the sources of uncertainties described below are much larger than those introduced by the use of an approximate solution. For such reason we prefer to use the excess variance as commonly done in the literature. Furthermore, while Antonucci et al. (2014) and Vagnetti et al. (2016) warn about possible biases introduced by the comparison of the excess variance in sources at different redshifts, we note that this bias only originates from an improper use of this estimator if one does not account for the different rest-frame timescales. The formal error on σ 2 N XS is given by Turner et al. (1999), assuming stationarity and uncorrelated Gaussian processes. Subsequently Vaughan et al. (2003) provided an alternative approach, more suited to compare the temporal behaviour in different energy bands. These estimates however only account for measurement errors and not for the random scatter intrinsic to any red-noise process. In addition, as shown in Allevato et al. (2013), the irregular sampling pattern in the case of sparsely sampled lightcurves should also introduce additional scatter that will depend on the sampling scheme and the intrinsic (and a-priori unknown) PSD shape. Figure 4 shows the excess variance of the main sample as a function of total source counts and average S/N ratio per bin 4 in the full Chandra energy band. At low count rates or low S/N ratios there is a very large scatter in the excess variance and a significant fraction of sources have negative values. As the S/N increases (average S/N per bin 0.8, corresponding roughly to total counts 350, see Figure 4) the distribution skews significantly toward positive excess variances, reflecting the improved ability to measure the intrinsic source variance. We measure a median variance σ 2 N XS = 0.14 +0.16 −0.08 corresponding to count rate fluctuations of ∼ 40% (σ N XS = 0.37) for variable sources with > 350 counts, where the uncertainties are the lower and upper quartiles. This is ∼ 30% larger than observed by Paolillo et al. (2004, where σ N XS = 0.28), as expected if AGNs have a red-noise PSD whose power increases on the longer timescales sampled here, and possibly also due to the fact that we are probing fainter sources 5 which tend to be intrinsically more variable (see §4.1). Motivated by the discussion in the previous paragraph, we decided to create two new samples of sources with: 1) S/N per bin > 0.8, in the observed (0.5-8 keV) and the rest-frame (2-8 keV) bands, and 2) more than 90 points in their lightcurves (to exclude sources at the edge of the field-of-view sampled only by part of the observations). We name them as the the "bright-O" and "bright-R" samples, respectively. The S/N lower limit value of 0.8 is reinforced by the results of Allevato et al. (2013) who showed that such a threshold is necessary to measure accurately the excess variance in sparsely sample data, as long as we average 10-20 individual measurements. 
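For reference, a compact sketch of the estimator defined above and of the ensemble averaging used in the following sections; the formulas follow Nandra et al. (1997) and Turner et al. (1999), while the naive error on the bin mean shown here is for illustration only (the analysis uses the Allevato et al. 2013 prescription instead).

import numpy as np

def excess_variance(rate, rate_err, mean_rate=None):
    # Normalized excess variance sigma^2_NXS of one lightcurve; negative values
    # are allowed and must be kept when averaging over an ensemble.
    x = np.asarray(rate, dtype=float)
    err = np.asarray(rate_err, dtype=float)
    xbar = np.mean(x) if mean_rate is None else mean_rate
    return np.sum((x - xbar) ** 2 - err ** 2) / (x.size * xbar ** 2)

def ensemble_mean_nxs(nxs_values):
    # Bin-averaged excess variance with a naive standard error of the mean.
    v = np.asarray(nxs_values, dtype=float)
    return v.mean(), v.std(ddof=1) / np.sqrt(v.size)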
For the bright-R sample all quantities, including luminosities and excess variance, are computed in the rest-frame 2-8 keV band. We also used the Extended Chandra Deep Field-South radio catalog by Bonzini et al. (2013), which classifies radio sources based on their infrared 24 µm to radio 1.4 GHz flux density ratio, to identify radio-loud AGNs in our samples (see their §3.2). There are 6 and 5 radio-loud AGN in the bright-O and bright-R samples, respectively. Although we do not detect a significant difference in the average variability of such sources from the rest of the AGN population, we decided to remove them anyway from the subsequent analysis as the physical origin of the variability may be different (e.g. originating from the jet). The final number of sources in the bright-O and bright-R samples is 110 and 94, respectively. The different sizes of the two samples is due to the presence of sources with missing redshift and to the lower average S/N in the rest-frame band. Dependence of AGN variability on luminosity, redshift and local density In Figure 5 we plot the measured σ 2 N XS against the (absorption-corrected) X-ray luminosity for sources in the bright-O and bright-R sample (grey points). The black crosses show the mean σ 2 N XS (and luminosity) over 15 sources. This mean should be representative of the intrinsic excess variance (assuming that all sources in a given luminosity bin have a similar intrinsic σ 2 N XS ). We point out that it is a common mistake to remove non-variable or negative σ 2 N XS sources. As shown in Allevato et al. (2013), the σ 2 N XS distribution extends to negative values especially for low variability/low S/N sources; removing these values would bias the variability ensemble estimates. We therefore used the excess variance measurements of all sources in each bin, irrespective of whether they are positive or negative, to estimate the mean σ 2 N XS . The error of the individual points in Fig. 5 take into account the measurement error of the points in the light curves, only, and have been estimated using the equations in Turner et al. (1999), as discussed above. Instead, the error of the mean excess variance in each bin are estimated following Allevato et al. (2013), and should be representative of the true, overall uncertainty of the mean excess variance. Figure 5 shows that AGN variability is anti-correlated with (unabsorbed) X-ray luminosity, thus confirming the results obtained in previous investigations of the CDF-S on different timescales by Paolillo et al. (2004); Young et al. (2012); Shemmer et al. (2014); Yang et al. (2016). The two panels in the Figure show that the dependence of the variability amplitude on X-ray luminosity is very similar between the bright-O and bright-R samples. However, in order to: 1) avoid the effects due to the absorbing column and its variations in time, which mainly influence energies below 2 keV, 2) to eliminate complications in the interpretation of our results due to the differences in the PSD between the soft and hard band seen in some local AGNs (e.g. McHardy et al. 2004b), and 3) to allow a better comparison of our results with those from with variability studies of local AGNs on long time scales, which are mainly based on RXTE data and focus on the 2-10 keV energy range, from now on, we will only use the rest-frame 2 − 8 keV measurements of the bright-R sample. Figure 6 shows the dependence of the average excess variance (including both variable and non-variable sources) on redshift. 
The dashed line in this figure shows the running average of the excess variance measurements of the sources in the bright-R sample and its error (estimated as explained above). We observe the variability amplitude to decrease with increasing redshift, which is consistent with the fact that at high redshift we probe higher luminosity sources (see Figure 3), which are intrinsically less variable (Figure 5). It is interesting to compare the dashed and the solid lines in Fig. 6. The solid line indicates the volume density of the CDF-S sources as a function of redshift (in arbitrary units, so that it can be easily compared with the dashed line). The CDF-S region is characterised by several overdensities in redshift space, due to the presence of large-scale structures (e.g. Gilli et al. 2003). The most prominent one, at z ≃ 0.7, also contains a large number of variable sources down to L_X ≃ 10^41 erg s^−1. The comparison of the mean excess variance with the local volume density does not show any correlation above the statistical uncertainty. The lack of any enhancement of the average excess variance coincident with the z ≃ 0.7 density peak suggests that the variability amplitude of these AGNs is not affected by environmental effects that could trigger and enhance variability through, e.g., enhanced accretion processes or dynamical instabilities.

Variability dependence on timescale
In addition to luminosity and redshift, if the intrinsic variability process has a "red-noise" character, the excess variance also depends on the rest-frame duration of the light curves, which usually span a fixed time interval in the observer's frame. To investigate this issue, we computed the excess variance of each object in the bright-R sample on 4 different timescales (in the observer's frame): 6005 days, 654 days, 128 days and 45 days (the '7 Ms', 'long', 'intermediate' and 'short' timescales, respectively). The first time interval is simply the total duration of the full 7 Ms dataset. The 'long' timescale measurement was obtained using data only from the last 3 Ms, which correspond to the points covered by the light grey bar in the top row of Figure 1, between 5350 and 6000 days. The 'intermediate' excess variance was measured over a 128-day interval of observations (see Figure 1). The excess variance on the shortest timescales is the hardest to estimate, since the variations on these timescales are usually dominated by statistical noise. To increase the reliability of our measurement we averaged the variance measured from 6 different short time intervals (420-450, 2900-2950, 3800-3840, 3860-3900, 3910-3940 and 5455-5500 days, dark grey bars in Figure 1) where the Chandra observations had a dense cadence and the sampling is more uniform. As shown by Vaughan et al. (2003) and Allevato et al. (2012), averaging over multiple observations allows us to reduce the intrinsic scatter on σ²_NXS. To further reduce the uncertainty of the excess variance estimates, we limited the analysis to objects whose lightcurves have an average S/N per bin > 1.5. Since the sampled timescales correspond to different rest-frame timescales at different redshifts, we grouped our sources (up to z ∼ 2) in the 2 redshift intervals listed in the legend of Figure 7. They were defined in such a way that the interval width was kept as small as possible, to reduce the internal difference in rest-frame timescales, while at the same time keeping at least 15 sources in each bin.
The average σ²_NXS of all the sources in each redshift bin is shown in Figure 7, as a function of the maximum rest-frame timescale (estimated at the mean z of each bin). At each timescale, the higher redshift measurements are systematically smaller than the average variability amplitude in the lower redshift bins. This is due to the different luminosity ranges sampled at each redshift. However, the important result is that, for both redshift bins, the variability amplitude clearly decreases toward shorter rest-frame timescales. Although the data plotted in Figure 7 are not direct PSD measurements (since the excess variance estimates the integral of the PSD between the minimum and maximum sampled timescales), the decrease of the excess variance with decreasing timescale is direct observational evidence for the red-noise nature of the variability process of the high-redshift AGNs. To demonstrate that this is indeed the case, in Figure 7 we overplot a model prediction based on the assumption that the average intrinsic PSD has a power-law shape. The black solid line shows σ²_mod when the PSD is a single power law of the form PSD(ν) = Aν^−1 (this model is appropriate for local AGNs on long timescales, e.g. Uttley et al. 2002; Markowitz et al. 2003; McHardy et al. 2004a, 2006). In this case σ²_mod = A ln(ν_max/ν_min), where ν_min and ν_max are the lowest and highest sampled rest-frame frequencies. We fixed ν_max = (1 + z)/(86400 ∆t_min^obs) s^−1, where ∆t_min^obs = 0.25 d, using the average redshift of the low-z sample. We also chose A so that the model excess variance matches the excess variance of the longest timescale for the low-redshift bin, in order to display the model behaviour. Qualitatively, the model predictions (a decrease of excess variance with decreasing timescale) are similar to what we observe. However, the model has a shallower slope than the observed one. This suggests that such a flat PSD, typical of local AGNs on long timescales, is inadequate to describe the measured excess variance on short timescales. A bending power-law model with a high-frequency cutoff, described in detail in §5.1, does a much better job of reproducing the observed trend. We believe that Figure 7 not only demonstrates that the high-redshift AGNs have PSDs which are well represented, on average, by a power law, but also that their PSD "breaks" above some characteristic frequency, as observed in several nearby AGNs.

TESTING VARIABILITY MODELS AND TRACING THE AGN ACCRETION HISTORY
The discussion above demonstrates that it is difficult to draw firm conclusions about the dependence of the variability amplitude on the underlying AGN physical parameters, and about its evolution, based on the excess-variance vs. luminosity/redshift/timescale plots alone. However, with the CDF-S data, we can now study more accurately the σ²_NXS − L_X relation at different redshifts, by properly treating the differences in sampled luminosities and timescales (due to differences in z). To this end we divided the bright-R sample sources into four redshift intervals and computed the average σ²_NXS of sources in luminosity bins containing at least 15 sources. We considered the two redshift bins already used in §4.2 and, to increase the redshift range, we also considered the [1.8 - 2.75] and [2.75 - 4] redshift bins 7 . The four columns of Figure 8 show the σ²_NXS measurements in the four different redshift intervals, plotted as a function of X-ray luminosity, for the four different timescales discussed in §4.2.
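To make the single power-law prediction above concrete, the short sketch below evaluates σ²_mod = A ln(ν_max/ν_min) for the rest-frame frequency window implied by an observed-frame baseline at redshift z; the amplitude A and the numbers in the example are placeholders.

import numpy as np

DAY = 86400.0  # seconds

def rest_frame_band(z, dt_obs_max_days, dt_obs_min_days=0.25):
    # Rest-frame frequency window (Hz) sampled by an observed-frame baseline.
    nu_min = (1.0 + z) / (dt_obs_max_days * DAY)
    nu_max = (1.0 + z) / (dt_obs_min_days * DAY)
    return nu_min, nu_max

def sigma2_powerlaw(A, z, dt_obs_max_days, dt_obs_min_days=0.25):
    # Band variance for a PSD(nu) = A/nu power law: sigma^2_mod = A ln(nu_max/nu_min).
    nu_min, nu_max = rest_frame_band(z, dt_obs_max_days, dt_obs_min_days)
    return A * np.log(nu_max / nu_min)

# The four observed-frame baselines used in the text, for a placeholder A and z
for dt in (45.0, 128.0, 654.0, 6005.0):
    print(dt, sigma2_powerlaw(A=0.02, z=1.0, dt_obs_max_days=dt))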
The decrease in variability with increasing X-ray luminosity is confirmed on most timescales and in most redshift bins, at least up to z ∼ 2 where we probe a large enough range of luminosities. Arguably, the uncertainty on the individual points is large, but we do not observe any significant increase of the variability amplitude with redshift on any of the timescales we considered. The amplitude of the σ²_NXS − L_X relations increases with increasing timescale, which is caused by the red-noise character of the observed variations. On the shortest timescales, the anti-correlation between σ²_NXS and L_X is steep. It then flattens as we sample increasingly longer time intervals; this behaviour is in agreement with the scenario where the intrinsic PSD is represented by a bending power law with a high-frequency cutoff. In order to understand the observed complex dependence of the variability-luminosity relation in the various redshift bins and over different timescales, we fitted the data shown in Figure 8 with the predictions of PSD models which are frequently used to parametrize the observed power spectra of nearby, X-ray bright AGNs, as we explain below. To better constrain the models at the lowest redshifts, which are not sampled by the CDF-S population, we also considered the data from the sample of local AGNs studied by Zhang (2011). The Zhang (2011) lightcurves are based on 14-year-long RXTE monitoring campaigns, closely matching the full 7 Ms observed timescales. Note that the RXTE monitoring cadence only allows us to probe the longest timescales, with no equivalent to the additional long, intermediate and short timescales probed for the CDF-S sources. The Zhang (2011) data are thus only shown in the rightmost panels of Figure 8.

Modeling the AGN variability
The AGN PSDs have been modelled in the past by either a simple power law, or a broken or bending power law (see e.g. Markowitz et al. 2003), where the normalization and the position of the break depend on the AGN physical parameters, such as BH mass and accretion rate. Here we adopt the bending power-law model. Following McHardy et al. (2004b) and Gonzalez-Martin & Vaughan (2012), the PSD is represented by the function

PSD(ν) = A ν^−1 (1 + ν/ν_b)^−1 ,   (2)

where A is the normalization factor and ν_b is the break (or bending) frequency; the PSD thus has a logarithmic slope of −1 for ν << ν_b, which becomes −2 for ν >> ν_b. The model PSD we adopt here is based on PSD studies of local AGNs. In principle, variability analysis of high-redshift AGNs should also test whether these models are appropriate for the modelling of their X-ray variability properties. The estimation of the PSD of high-redshift AGNs is challenging, mainly due to the poor temporal sampling of the existing light curves. However, as we have argued in §4.2, the results plotted in Figure 7 already suggest that PSD models like the one defined above are appropriate to describe the X-ray variability of the high-redshift AGNs. According to the model PSD, the lightcurve variance takes the form

σ²_mod = ∫_{ν_min}^{ν_max} PSD(ν) dν ,   (3)

where ν_max and ν_min are the highest and lowest rest-frame frequencies sampled by our lightcurves. In particular, ν_min = (1 + z)/∆t_max^obs and ν_max = (1 + z)/∆t_min^obs, where ∆t_max^obs is the total duration of the lightcurve and ∆t_min^obs is the minimum sampled timescale. In our case ∆t_max^obs = (45, 127, 654, 6005) days for the short, intermediate, long and 7 Ms timescales, respectively. In the case of unevenly sampled lightcurves, the choice of ∆t_min^obs is not obvious.
We chose the minimum gap between consecutive observations in our lightcurves, corresponding to ∆t_min^obs = (0.25, 0.95, 0.25, 0.25) days for the short, intermediate, long and 7 Ms timescales, respectively. To link the variability of the AGN to its physical properties we explore four different variations of the PSD model defined by eq. (2). In summary, the first two models assume that only the break frequency depends on the AGN physical parameters, i.e. on the BH mass (Model 1) or on BH mass and accretion rate (Model 2). The last two models include a dependence of the PSD amplitude on the accretion rate as well.

Model fit procedure
We fit each one of the four models presented in §5.1 to the data points plotted in Figure 8, over all luminosities, timescales and redshifts simultaneously. Equation (3) was integrated over the range of rest-frame frequencies sampled in each redshift interval, as described in §5.1, in order to derive the model excess variance σ²_mod. The observed σ²_NXS is an estimator of the intrinsic lightcurve variance σ²_mod, provided that we take into account the biases introduced by the sampling pattern (red-noise leakage and uneven sampling). We adopted the Allevato et al. (2013) recipe to correct for such biases, deriving the predicted excess variance as σ²_pred = σ²_mod / (C · 0.48^(β−1)), where β is the PSD slope below the minimum sampled frequency ν_min, and C is a corrective factor dependent on the sampling pattern. In our models, β was estimated as the average slope between ν = ν_min/5 and ν_min, i.e. the range from which most of the leaking power originates (cf. Allevato et al. 2013). The factor C ranges from 1 to 1.3 and allows us to account for the missing power due to the gaps in the lightcurve; we adopted the upper value of C = 1.3, adequate for the sparse sampling of the CDF-S lightcurves and for β = 1.0−1.5, but we verified that changing the bias factor yields consistent results within ∆λ_Edd ∼ 0.03, where higher accretion rates correspond to lower bias values. The only free parameter in our models is the average accretion rate λ_Edd. For a given accretion rate we compute the bolometric luminosity L_bol, using the X-ray luminosity of each point, according to the Lusso et al. (2012) prescription. Using λ_Edd and L_bol, we then compute the average BH mass for all AGNs which contributed to σ²_NXS in each bin, and ν_b and the normalization A according to each model prescription. Knowing ν_b and A, it is then possible to compute σ²_mod using equation (3) for each point in the σ²_NXS − L_X relations. The best-fit λ_Edd is then found by χ² minimization of the differences between σ²_NXS and σ²_pred (we note again that we used σ²_pred, and not σ²_mod, to take into account the bias due to sampling and red-noise leakage). We note that the CDF-S data used in the fit are not entirely independent. In fact, as discussed above, σ²_NXS measures the integral of the PSD which, for the long timescale bins, includes some contribution from the same high frequencies sampled on short timescales. However, this correlation is not expected to be strong, since the PSD has a steep negative slope, so that, in each timescale bin, most of the power comes from frequencies close to the minimum sampled value. In any case, the correlation would reduce the degrees of freedom of our data, thus yielding lower probabilities and strengthening our conclusions about which models should be rejected.
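A schematic, deliberately simplified rendering of the fitting procedure just described: the bending power law of eq. (2) is integrated as in eq. (3), corrected with the Allevato et al. (2013)-style factor, and compared with the binned measurements while scanning λ_Edd, the only free parameter. The mapping from (λ_Edd, L_X) to (A, ν_b) is a toy placeholder, not one of the paper's Models 1-4; a real fit would use the Lusso et al. (2012) bolometric correction and a McHardy et al. (2006)-type break-frequency relation instead of the stand-in function.

import numpy as np
from scipy.integrate import quad

def psd(nu, A, nu_b):
    # Bending power law of eq. (2): slope -1 below nu_b, -2 above.
    return A / (nu * (1.0 + nu / nu_b))

def sigma2_model(A, nu_b, nu_min, nu_max):
    # Band variance of eq. (3), integrated numerically.
    value, _ = quad(psd, nu_min, nu_max, args=(A, nu_b))
    return value

def sigma2_predicted(A, nu_b, nu_min, nu_max, beta=1.0, C=1.3):
    # Bias-corrected prediction: sigma^2_pred = sigma^2_mod / (C * 0.48**(beta - 1)).
    return sigma2_model(A, nu_b, nu_min, nu_max) / (C * 0.48 ** (beta - 1.0))

def toy_model_parameters(lambda_edd, lx):
    # Placeholder mapping (NOT Models 1-4): crude bolometric correction of 10,
    # M_BH from the Eddington luminosity, and a break frequency scaling as 1/M_BH.
    m_bh = 10.0 * lx / (lambda_edd * 1.26e38)   # in solar masses
    nu_b = 2.0e-6 * (1.0e6 / m_bh)              # Hz, toy normalization
    return 0.02, nu_b                           # constant toy amplitude A

def chi2_of_lambda(lambda_edd, data):
    # data: iterable of (sigma2_obs, err, lx, nu_min, nu_max) ensemble points.
    chi2 = 0.0
    for s2_obs, err, lx, nu_min, nu_max in data:
        A, nu_b = toy_model_parameters(lambda_edd, lx)
        chi2 += ((s2_obs - sigma2_predicted(A, nu_b, nu_min, nu_max)) / err) ** 2
    return chi2

# Grid search over the single free parameter (the 'data' list would hold the
# binned excess-variance measurements of Fig. 8):
# grid = np.logspace(-2.5, 0.0, 40)
# best = grid[np.argmin([chi2_of_lambda(l, data) for l in grid])]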
Model fit results
Initially we fitted the data allowing a different λ_Edd for each redshift bin. The best-fit models are plotted in Fig. 8 and the best-fit results are listed in Table 1, together with the minimum χ² and the likelihood of the model. For completeness, we report the best-fit results both for the case in which we consider the CDF-S data only and for the case in which we add the Zhang (2011) data as well. The best-fit λ_Edd values for the CDF-S sources are identical (as each redshift bin is independent from the others); however, the addition of Zhang's low-redshift data constrains the models better (in terms of the null hypothesis probability). For that reason we discuss the best-fit results obtained when fitting both the CDF-S and the low-redshift data. From Fig. 8 we see that the models become steeper on the shortest timescales, due to the presence of the PSD break, and flatten on the longest one, where most of the power comes from the ν^−1 part of the PSD. The Zhang (2011) lightcurves, on the other hand, are not affected by time dilation, given their redshift z ≃ 0, so that their variability is not expected to anti-correlate with luminosity, since the PSD break falls outside the range of probed frequencies. The best-fit results show that Model 3 is formally rejected at the > 99% confidence level. Model 1 provides a statistically acceptable fit, but with an extremely low Eddington ratio in all redshift bins except the [0.4 - 1.03] bin, where it is unconstrained. In fact, the resulting best-fit λ_Edd values are so low that even the low-luminosity CDF-S AGNs should have BH masses larger than ∼ 10^8 M_⊙ at all redshifts to explain their observed luminosity. Forcing the model to any value of λ_Edd > 0.03 results in the rejection of the model at the > 99% level. We therefore conclude that models where the break frequency depends on BH mass only (irrespective of whether the PSD amplitude depends on the Eddington ratio or not) are not consistent with the data. Model 2 is also formally rejected, at the > 99% confidence level. We note that Models 1, 2 and 3 all show some tension with the CDF-S data at the lowest luminosities on long timescales (see Fig. 8), as their normalizations are too low. On the other hand, Model 4 reproduces rather well the overall trends and the dependence of variability on luminosity and timescale for the CDF-S data, and its variable PSD normalization allows a better agreement with the observational measurements. The behaviour of λ_Edd as a function of redshift is presented in Fig. 9 (we do not plot the Model 1 best-fit results, as they suggest an accretion rate which is either unconstrained or extremely low). Model 3 predicts an increase of the accretion rate from ∼ 0.07 in the local Universe up to almost 0.3 at redshifts higher than 3. However, the errors of the best-fit parameters are so large that we cannot claim a significant indication of an increasing accretion rate with increasing z. In fact, even in the case of Model 4 (which provides the best fit to the data) the best-fit λ_Edd errors are so large that we cannot argue for a significant variation of the accretion rate with redshift. For that reason, we repeated the fits (to both the CDF-S and Zhang 2011 data), keeping the accretion rate fixed to a common value in all redshift bins. The best-fit results are listed in Table 2 and are consistent with those presented above. The horizontal solid line in Fig. 9 indicates the best-fit λ_Edd, which is very similar to the mean of the accretion rate values listed in Table 1.
Model 1 fits the data well, but with a very small accretion rate (as before), while the Model 3 best fit is still rejected with high confidence. This time, Model 2 is marginally accepted (at the 1% level), but it is still Model 4 that again provides the best fit. In fact, using the F-test, we can see that the improvement of the Model 4 best fit when we let λ E dd free is not significant when compared with the best fit in the case when λ E dd is kept constant. Summarising, our analysis supports the view that the variability amplitude of the high redshift AGN can be explained if we assume a power spectrum which is identical to the PSD of local AGNs (i.e., the variability mechanism is the same in local and high-z AGN). In particular, the variability amplitude of the CDF-S AGN, and its dependence on luminosity, z and time scale, can be explained if the PSD break frequency ν b depends on both BH mass and λ E dd , in agreement with McHardy et al. (2006). Most probably, the PSD amplitude also depends on λ E dd , as proposed by Ponti et al. (2012). The Eddington ratio of the AGN population seems consistent with a constant value at all redshifts. The quality of our data does not allow us to detect a dependence of λ E dd on z. Variability properties of high-redshift AGNs In this work we have analysed the lightcurves of AGNs in the CDF-S region, using a dataset spanning ∼ 17 years. The observing strategy of the CDF-S survey allows us to derive lightcurves with similar sampling for all sources, thus minimising the scatter introduced in timing analysis when different sampling patterns are used for different sources (Allevato et al. 2013). In order to assess the level of variability of our sources we used two different approaches, the first one based on Monte Carlo simulations suited to assess whether a source is variable within a certain confidence level, and the second one based on excess variance analysis in order to measure the intrinsic average variability of the AGN population and link it to the physical properties of the AGNs themselves. We confirm the result of previous studies that virtually all AGNs are variable, and only the data quality prevents us from detecting variability in faint sources: 90% of the sources with > 1000 net counts (e.g. ∼ 20 counts/epoch) are detected as significantly variable at the 95% confidence level. This result is due in large part to the long timescales probed in the CDF-S dataset, as the likelihood of detecting a source as variable increases with the sampled rest-frame timescale, as expected for sources with a red-noise PSD. In some local AGNs a low-frequency break has been observed in the PSD, below which the PSD shape becomes flat and the variability power becomes approximately constant. The fact that, on average, the variability of our sources is still increasing on the longest timescales probed by our observations of ∼ 17 years constrains the position of this low-frequency break to be, on average over the sampled AGN population, at even longer timescales, in agreement with the results of Zhang (2011) and Middei et al. (2017). One of the most evident clues that AGN variability is related to the physical properties of the central BH is the discovery that variability anti-correlates with intrinsic AGN X-ray luminosity (and possibly also UV/optical luminosity, e.g. Collier & Peterson 2001; Kelly et al. 2009; MacLeod et al. 2010; Simm et al. 2016). 
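The F-test invoked above, comparing the fit with one λ E dd per redshift bin against the fit with a single common λ E dd , is the standard test for nested χ 2 fits. A generic sketch, with the inputs as placeholders:

from scipy.stats import f as f_dist

def f_test(chi2_simple, dof_simple, chi2_complex, dof_complex):
    # chi2_simple / dof_simple: fit with a single common lambda_Edd.
    # chi2_complex / dof_complex: fit with lambda_Edd free in each z bin.
    extra = dof_simple - dof_complex  # number of extra free parameters
    f_stat = ((chi2_simple - chi2_complex) / extra) / (chi2_complex / dof_complex)
    return f_stat, f_dist.sf(f_stat, extra, dof_complex)

# A large p-value (second return value) means the extra freedom is not
# required, which is the situation described above for Model 4.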
Figure 9. Eddington ratio estimates as a function of redshift from model fitting to the CDF-S and local AGN variability (Table 1). The solid horizontal lines and shaded areas represent the results of the fit with a constant λ E dd reported in Table 2.
In X-rays this anti-correlation has been observed in samples of nearby AGNs and has been interpreted as the consequence of a larger BH mass in more luminous objects, which would also increase the size of the last stable orbit and thus smear the overall variability produced in the innermost parts of the accretion disk (Papadakis 2004). A clue that such a correlation holds also in higher redshift AGNs came from ROSAT observations (Almaini et al. 2000; Manners, Almaini & Lawrence 2002), and has been confirmed by observations of the CDF-S (Paolillo et al. 2004; Young et al. 2012; Yang et al. 2016), of the Lockman Hole (Papadakis et al. 2008), of the COSMOS field (Lanzuisi et al. 2014) and by serendipitous XMM-Newton/Swift samples (Vagnetti et al. 2011, 2016). In order to study the dependence of variability on the physical properties of the AGNs, we adopted an 'ensemble' analysis approach. Based on the results of the simulations performed by Allevato et al. (2013) we focused on a subsample of relatively bright (S/N per bin > 0.8 or 350 counts) sources, in order to avoid introducing statistical biases in our analysis. We confirm the presence of an anti-correlation between variability and luminosity, but we fail to detect a (statistically significant) high-luminosity upturn suggested in some previous investigations of both the CDF-S sources (Paolillo et al. 2004) and of other samples (Papadakis et al. 2008; Vagnetti et al. 2011). Our analysis supports the view that high-redshift AGNs have a similar PSD to local sources, where the variability amplitude increases toward longer timescales. Furthermore, while globally the PSD could be represented by a simple power-law model, it seems to be better reproduced by a bending power-law model, similar to many local AGNs where a high-frequency break ν b is detected in the PSD. This is possibly the first direct indication that we can extend to high redshift the PSD behaviour observed for local sources. Given the complex interplay among variability, luminosity, redshift and timescale, we showed that a proper study of the behaviour of X-ray variability of high-redshift AGNs, its possible evolution and its dependence on the AGN physical parameters (i.e. BH mass and accretion rate), must account for all these dependencies simultaneously. To this end we compared the average excess variance, over three orders of magnitude in X-ray luminosity and over 4 different timescales, further dividing the sample into 4 redshift bins, with the predictions from various PSD models. We accounted for both statistical uncertainties and systematic effects due to, e.g., the different timescales and energy ranges probed at each redshift, as well as from the sparse sampling pattern and red-noise leakage, following the recipe of Allevato et al. (2013). To better constrain the variability at low redshift, where the CDF-S survey covers a volume too small to detect a significant number of AGNs (i.e. z ≲ 0.02), we further included in the analysis the sample of local AGN studied by Zhang (2011), whose lightcurves are comparable to those of the CDF-S sources on the longest timescales. All models assume a bending power-law where the break frequency ν b depends on either the BH mass, or both the BH mass and the Eddington ratio λ E dd of the sources (McHardy et al. 2006). 
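The ensemble quantity being compared with the model predictions above is the normalized excess variance. Its standard single-lightcurve estimator (e.g. as in Nandra et al. 1997 and Vaughan et al. 2003) is sketched below; the paper's exact binning and averaging over sources is not reproduced.

import numpy as np

def normalized_excess_variance(rates, errors):
    # sigma^2_NXS = (S^2 - <err^2>) / <x>^2: the raw lightcurve variance
    # minus the mean square measurement error, normalised by the squared
    # mean count rate.  Values can be negative when measurement noise
    # dominates the intrinsic variability.
    rates = np.asarray(rates, dtype=float)
    errors = np.asarray(errors, dtype=float)
    mean = rates.mean()
    s2 = np.sum((rates - mean) ** 2) / (rates.size - 1)
    return (s2 - np.mean(errors ** 2)) / mean ** 2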
Additionally we tried models with either a fixed PSD normalization (Papadakis 2004;González-Martín et al. 2011), or a variable one depending on the Eddington ratio λ E dd (Ponti et al. 2012). Our results indicate that the variability of high-redshift AGNs is consistent, within the current uncertainties, with PSD represented by a bending power-law where the break frequency ν b depends on both the BH mass and the Eddington ratio λ E dd , in agreement with the results found by McHardy et al. (2006) for local sources. Our best fit model suggests that the PSD amplitude should depend on the accretion rate as well, as proposed by (Ponti et al. 2012); the tension between models with a fixed PSD amplitude and the CDF-S data on the longest timescales, observed also by Young et al. (2012) and by Yang et al. (2016), seem to support this conclusion. The data rule out the possibility that ν b depends only on the BH mass (Gonzalez-Martin & Vaughan 2012), irrespective of whether the PSD normalization is constant or not, unless we adopt implausibly low, average accretion rates. We note that understanding the dependence of the PSD on the physical parameters characterizing the AGN population is crucial if we intend to use AGN variability as a cosmological probe, as proposed for instance by La Franca et al. (2014), since if the measured variability has a dependence on the accretion rate this will introduce an additional scatter in the M BH − σ 2 N XS relation used to measure cosmological parameters, as well as a systematic redshiftdependent bias if the accretion rate changes with lookback time. Constrains on the BH accretion history We used the model fitting results to measure the average accretion history of AGNs. To probe a possible change of the Eddington ratio with lookback time we first allowed λ E dd to vary, finding that models with fixed PSD normalization show no λ E dd dependence on redshift, while models with a variable normalization suggest a possible increase up to z ∼ 2 − 3. Repeating the fits assuming a constant value for λ E dd , we obtain that a λ E dd ≃ 0.05 − 0.10 is consistent with the data for all the models except one. Given the large uncertainties, an increase of the Eddington ratio with lookback time is not ruled out by our data, but in any case the increase cannot be as strong as suggested by previous studies of AGN variability in X-ray selected samples, where it seemed that the high-redshift population at z > 2 could be dominated by near-Eddington accretors (e.g. Almaini et al. 2000;Manners, Almaini & Lawrence 2002;Paolillo et al. 2004;Papadakis et al. 2008). This difference is due in part to the improved statistical approach based on the simulation of Allevato et al. (2013), but also from a new and more physical modelling of AGN variability on different timescales, luminosities and redshift simultaneously, allowed by the extended CDF-S data. This suggests again that extreme care should be used in interpreting variability results based on small, sparse samples without taking into account all sources of bias. Arguably, the large uncertainties due to the limited sample size do not yet enable drawing definitive conclusions on the evolution of accretion with redshift but we point out that we are quantitatively constraining λ E dd (z) through X-ray variability measurements for the first time. The average Eddington ratios derived for the CDF-S sample are in good agreement with estimates in the literature. For instance Lusso et al. 
(2012), probing AGNs in the XMM-COSMOS survey find λ E dd in the range 0.015 − 0.26, dependent on both bolometric luminosity and AGN type. Their results are in even better agreement with ours considering that our median log(L bol ) ≃ 45.5 and for such luminosity the COSMOS sample yields a λ E dd = 0.07 − 0.12 inclusive of redshift and AGN type dependence. On the other hand Lusso et al. (2012) do not find any evidence of λ E dd evolution with redshift, although they probe AGNs with z 2.3 and are thus less sensitive to the higher redshift population than us. Brightman et al. (2013), exploring the COSMOS and E-CDF-S find Eddington ratios spanning a similar range of Lusso and collaborators, but skewed toward a slightly higher median λ E dd ∼ 0.15; this is likely a consequence of the somewhat higher median X-ray luminosity probed by their sample compared to this work, since their hard 2-10 keV log(L X ) ∼ 44.2 erg s −1 while our median value is log(L X ) = 43.6 erg s −1 . Similarly Suh et al. (2015) extend this type of study to a sample including the Lockmann Hole, finding a broad Eddington ratio distribution described by a lognormal distribution peaking at λ E dd ∼ 0.25. Again however this sample extends to higher log(L bol ) ∼ 47 than sampled by our work. Studies of bright quasars, such as the SDSS-DR7 sample of Shen et al. (2011), have also usually reported higher average Eddington ratios λ E dd > 0.1 with a sizeable fraction of sources accreting close to the Eddington rate (see, e.g. Figure 4 in Wu et al. 2015); however quasar samples are characterised by bolometric luminosities which are between one and two orders of magnitude greater than all the studies mentioned above, including our own. In the X-ray band luminous quasars have been monitored by Shemmer et al. (2014), finding that their variability is generally larger than expected from their luminosities, based on an extrapolation from lower luminosity sources in the 2 Ms observations of the CDF-S. While this may result from higher average Eddington ratios, it must be noted that unfortunately monitoring observations of such sources in the X-ray band are sparse and heterogeneous, performed through different observatories and in non-dedicated campaigns. This results in large uncertainties that prevented them from drawing definitive conclusions. We further point out that if PSD amplitude depends on λ E dd , as suggested by our data, the conclusions of Shemmer et al. (2014) (which used fixed PSD amplitude models) may need to be revised in favour of a lower Eddington ratio, although in at least two of their cases the Eddington ratio has been estimated, independently, from H β and the values are quite high. In fact, follow up observations of high-redshift QSO with Chandra are yielding results consistent with our CDF-S observations (Shemmer et al. 2017, submitted). Our variability-based estimates allow us to derive average λ E dd ; however it is clear that AGNs possess an intrinsic Eddington ratio distribution. For instance the work by Lusso et al. (2012) derives a lognormal λ E dd distribution after correcting for selection effects, while other authors (Aird et al. 2012;Bongiorno et al. 2012;Aird et al. 2013) claim that the intrinsic λ E dd distribution in galaxies can be represented by a power law function, which is independent of the host galaxy stellar mass, possibly with a soft cutoff around the Eddington limit. 
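The comparisons above all rest on the same bookkeeping between X-ray luminosity, bolometric correction, Eddington ratio and BH mass. A minimal sketch follows; the constant k_bol is a placeholder standing in for the luminosity-dependent prescriptions (such as Lusso et al. 2012) actually used, and its default value is only what the median log L X ≃ 43.6 and log L bol ≃ 45.5 quoted above would imply.

L_EDD_PER_MSUN = 1.26e38  # Eddington luminosity per solar mass, in erg/s

def black_hole_mass(log_lx, lam_edd, k_bol=80.0):
    # k_bol is an assumed constant bolometric correction, L_bol = k_bol * L_X;
    # the papers discussed here use luminosity-dependent corrections instead.
    l_bol = k_bol * 10.0 ** log_lx             # erg/s
    return l_bol / (lam_edd * L_EDD_PER_MSUN)  # in solar masses

# For log L_X = 43.6 and lam_edd = 0.1 this gives roughly 2.5e8 Msun
# (order of magnitude only); lower assumed Eddington ratios push the
# implied masses correspondingly higher.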
In any case the observed λ E dd is influenced by selection effects, and tends to display a lognormal distribution peaking at λ E dd ∼ 0.01 − 0.1. A recent revision of the work by Aird and collaborators (Aird et al. 2017) suggest that the λ E dd distribution is more complex than a simple power-law, peaking somewhere in the range λ E dd ∼ 0.01−0.1, although still ∼ Eddington-limited, and dependent on host galaxy type (e.g. star-forming vs quiescent). The same study also suggests that the average AGN activity shifts toward higher λ E dd with redshift, supporting the tentative trend observed in some of our models. All these results are based on X-ray selected samples of AGNs, but Vito et al. (2016) recently showed that BH accretion in individually Xray undetected galaxies is negligible compared to the accretion density measured in X-ray sources. Attempting to constrain the λ E dd distribution through variability is difficult due to the large scatter of variability measurements, which is intrinsic to stochastic processes; furthermore at present our analysis is strongly limited by the available statistics, as we have to divide our data into luminosity and redshift bins. This also limits our ability to explore the possible dependence of λ E dd on other parameters such as the AGN type and host galaxy. Finally, BH spin is the one physical parameter missing from any of the analyses discussed here, so that we are assuming an underlying spin distribution independent of luminosity, redshift, host-galaxy type etc. Although this is a simplistic approach, unfortunately the study of BH spins is beyond the reach of current facilities, except for a few local AGNs. We conclude pointing out that the limits of this study are mainly due to the sample size, which prevents us from reducing the statistical uncertainties. Future wide-field/large-effective-area facilities will enable making this method competitive with other tracers, allowing one to probe more effectively the luminosity, redshift and timescale dependence of the intrinsic AGN variability, and also to assess whether the average accretion (and possibly mass) may differ between, e.g., different galaxy populations at each redshift.
2017-08-01T15:35:40.000Z
2017-07-14T00:00:00.000
{ "year": 2017, "sha1": "d09370805b5be5aaa1c69ab994d6b1b62326153a", "oa_license": null, "oa_url": "https://helda.helsinki.fi/bitstream/10138/225100/1/watermark.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "87e47a3ff1d108256d47deee0eb51d96cdf4796a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
1558250
pes2o/s2orc
v3-fos-license
Clinical Pharmacology: Advances and Applications Dovepress Epithelial Cell-adhesion Molecule-directed Trifunctional Antibody Immunotherapy for Symptom Management of Advanced Ovarian Cancer Despite advances in cytotoxic chemotherapy and surgical cytoreduction, disease recurrence continues to be a troubling problem in patients with advanced-stage epithelial ovarian cancer (EOC). Malignant ascites affects approximately 10% of patients with recurrent EOC and is associated with troublesome symptoms, including abdominal pressure, distension, dyspnea, pelvic pain, and bowel/bladder dysfunction. To date, no effective therapy has been identified for the treatment of malignant ascites in patients with recurrent, advanced-stage ovarian cancer. Recently, immune modulation has gained attention as a novel approach to anti-cancer therapy. This review explores the role of epithelial cell-adhesion molecule (EpCAM)-directed immuno-therapy, with a specific focus on the mechanism of action of the trifunctional antibody catumax-omab (anti-EpCAM × anti-CD3). In addition, clinical trials exploring the use of catumaxomab in the treatment of malignant ascites in patients with ovarian cancer are reviewed. Introduction Epithelial ovarian cancer (EOC) accounts for 25% of all malignancies affecting the female genital tract, and is the most lethal gynecologic malignancy. In 2013 there will be an estimated 22,240 new ovarian cancer cases in the USA, with 14,030 deaths. 1 Advanced-stage EOC is traditionally managed with cytoreductive surgery, followed by combination platinum-and taxane-based chemotherapy. 2 Despite aggressive treatment, the majority of these patients develop recurrent cancer, with selection of chemotherapy-resistant clones. 3 The subset of patients who develop recurrent disease is a population that traditionally faces extended exposure to multiple cytotoxic chemotherapy regimens, dictated by their disease-free interval. [4][5][6][7] Throughout this period, management of disease-associated morbidities becomes a priority in an effort to improve quality of life. Malignant ascites, which affects approximately two-thirds of patients with EOC, primary peritoneal cancer (PPC), and fallopian tube cancer (FTC), is common, and to date few effective therapies have been identified. 8,9 Importantly, ascites is associated with troublesome symptoms, including abdominal pressure and distension, dyspnea, pelvic pain, and bowel/bladder dysfunction. 10 Historically, malignant ascites in patients with EOC was treated utilizing diuretics and salt restriction, as well as intraperitoneal administration of sclerosing agents and radioactive isotopes. 11,12 Patients with malignant ascites secondary to EOC rarely suffer from fluid accumulation due to intraparenchymal liver metastasis, and portal hypertension is uncommonly identified. Rather, these patients traditionally exhibit reduced intravascular volume, making diuretic use an unattractive option. 13 With respect to the use of radioactive isotopes, poor tumor penetration and intestinal toxicity (necrosis and perforation) due to loculations and prolonged exposure have caused them to fall out of favor. Limited success rates, in combination with significant side effects, have resulted in the infrequent use of these modalities. Mechanical drainage of accumulated ascitic fluid via therapeutic paracentesis results in relief in up to 90% of patients. 
14 However, recurrence/reaccumulation of ascites is common, and multiple paracentesis are required, with their associated risks of pain, visceral perforation, infection, and hematoma formation. 15,16 Furthermore, these patients are likely to have intra-abdominal adhesions as a result of their extensive surgical cytoreduction, and the resultant fluid loculations limit the therapeutic benefit derived. 17 Alternatively, placement of permanent intra-abdominal drains and peritoneovenous shunts has been explored. Experience with these modalities has been poor, as blockage of the shunt, infectious morbidity, as well as embolization and implantation of tumor cells in distant organs, were reported to be relatively common complications. 12,[18][19][20][21] Unlike other solid malignancies, where ascites portends a universally poor prognosis, patients with EOC and ascites at the time of diagnosis can expect 5-year survival rates approaching 40%. 15,22 This discrepancy is largely attributable to the biology of ovarian cancer, and the subsequent etiology of the abdominal fluid accumulation. Specifically, malignant ascites in patients with EOC is thought to be attributable to (1) lymphatic obstruction, (2) increased vascular permeability, (3) release of inflammatory cytokines, and (4) direct increase of fluid production by the cancer cells lining the peritoneal cavity. 9,[23][24][25] Given the above, exploration into immune modulation as a novel anti-cancer approach for the treatment of malignant ascites in patients with advanced-stage EOC has been a clinical priority. 9 Tumor immunology is complex, as activation of the immune system requires presentation of a foreign antigen via antigen-presenting cells (APCs) to T-cells. 26 In normal immune responses, the cytotoxic effects are driven by a combination of T-lymphocyte activity, antibody-dependent mechanisms, and natural killer (NK) cell activation. [26][27][28] Tumor cell destruction by tumor antigen-specific T lymphocytes has been demonstrated in vitro for a variety of both solid and hematologic malignancies. Certain cytokines, including interferon-γ and tumor necrosis factor (TNF), are essential components of the above response. Conversely, antibody-mediated cytotoxicity relies on direct antibody binding to tumor cell surfaces, and the subsequent recruitment of granulocytes and macrophages, both of which contain surface receptors for the fragment crystallizable (Fc) portion of the antibody. Analogously, activated NK cells contain surface receptors for antibody Fc regions, and participate in antibody-mediated cytotoxicity. Furthermore, activated NK cells secrete TNF-α, inducing hemorrhage and tumor necrosis. Unfortunately, tumor cells have adapted to escape immune destruction, utilizing tumor-related and host-related mechanisms. Cancer cells may fail to provide an antigenic target due to lack of an antigenic (foreign) epitope, lack of major histocompatibility complex-I molecule, or via antigenic modulation and tumor masking. 29,30 Host-related mechanisms include immune suppression, regulatory T-cell (Treg) suppression of tumor immunity, deficient epitope expression by host APCs, and failure of host immune cells to reach the tumor due to stromal barriers. 31 Ovarian cancer immunology An increasingly robust body of literature supports the contribution of immune cells in both ovarian cancer therapeutics and pathogenesis. 
[32][33][34][35][36][37][38][39][40][41][42] In 1988, immune studies performed on ovarian cancer specimens confirmed the presence of tumor-infiltrating lymphocytes (TILs). 43 Cluster of differentiation 3 (CD3)+ TILs have been found to independently predict tumor recurrence and prolonged survival. 42 Conversely, lack of TILs has been associated with poor survival. 34,[36][37][38][39][40] Certain subsets of T-cells exhibit a pro-cancer effect. Treg infiltration has been associated with higher cancer grade and advanced surgical stage. 44 Additionally, increasing numbers of circulatory Tregs identified in peripheral blood samples have been linked to disease progression. 45 These CD4+CD25+ Tregs are hypothesized to mitigate the immune response via two distinct mechanisms, inhibiting both cell-cell contact and interleukin-2 transcription. 31 The impact of B-cell function on cancer immunology has been more difficult to discern as a result of conflicting data. [46][47][48][49] Epithelial cell-adhesion molecule (EpCAM) Epithelial cell-adhesion molecule (EpCAM) is a calcium-independent transmembrane glycoprotein cell-adhesion molecule, with a molecular weight of 39-42 kDa. 50 Traditionally, it is expressed in normal epithelium, with the exception of squamous epithelium, epidermal keratinocytes, gastric parietal cells, myoepithelial cells, thymic cortical epithelium, and hepatocytes. 51,52 EpCAM is abundantly expressed on human cancers, and was first described as a dominant antigen in patients with colon carcinoma. 53,54 Since then, EpCAM overexpression has been associated with poor prognosis in patients with ovarian, breast, prostate, and gallbladder carcinoma, both functioning as an oncogene and suppressing CD4+ T-cell-dependent immune responses. [55][56][57][58] Initial interest in the utilization of EpCAM as a target for active immunotherapy emerged following a seminal publication reporting anti-tumor effects with the EpCAM-specific monoclonal antibody edrecolomab in patients with metastatic colorectal cancer. 58,59 Despite initial promise, the therapeutic impact of this approach was found to be inferior to traditional cytotoxic chemotherapy regimens in colon cancer patients. With respect to ovarian cancer, EpCAM expression has been demonstrated in the main histotypes, and is well documented in microarray studies. 60 In a study of 21 biomarkers in four distinct ovarian cancer subtypes (high-grade serous, clear cell, endometrioid, mucinous), only EpCAM exhibited consistently high expression. 60 Furthermore, data indicate that EpCAM appears to be a stable antigen in ovarian cancer patients, with preserved expression levels in primary, recurrent, and metastatic specimens, as well as malignant ascites and effusions. 61,62 The trifunctional antibody catumaxomab (anti-EpCAM × anti-CD3) In an effort to circumvent the limitations initially encountered with monoclonal antibodies (mAb), bispecific antibodies, which allow for simultaneous binding of both T-cells and accessory cells, were designed and tested. Ultimately, the design of trifunctional antibodies allowed for combination of two distinct anti-tumor functionalities (T-cell-mediated death and accessory cells). 27,63,64 Catumaxomab is a trifunctional mAb with two different antigen-binding sites and a functional Fc domain (Figure 1). 65,66 It is composed of a mouse κ light chain, a rat λ light chain, a mouse immunoglobulin (Ig) G2a heavy chain, and a rat IgG2a heavy chain. 
The two specific antigen-binding sites bind to epithelial tumor cells via EpCAM and to T-cells via CD3. In addition, catumaxomab activates Fcγ-receptor I-, IIa-, and III-positive accessory cells (dendritic cells [DCs], macrophages, and NK cells) via its functional Fc domain, resulting in a comprehensive and complex immune reaction (Figure 1). 65,66 The functionality and selectivity of this novel antibody rely on the fact that tumor cells in malignant ovarian cancer-associated ascites have been shown to express EpCAM in 70%-100% of cases, while the mesothelial cells lining the peritoneal cavity lack expression. 67 Following EpCAM binding, catumaxomab results in recruitment and activation of immune effector cells, resulting in its antineoplastic activity. The two unique antibody-antigen binding sites of the trifunctional antibody enable recognition of both T-cells and tumor cells. The functional Fc domain can then activate neighboring Fcγ-receptor-positive macrophages, DCs, and NK cells, as described by Eissler et al. in elegant animal model experiments. 68 Ultimately, tumor cell death results from cell lysis via perforin/granzyme-B, antibody-mediated cell death, and phagocytosis. 66,69,70 Catumaxomab has also been shown to induce anti-tumor immunity in animal models and in patients with peritoneal carcinomatosis. In a study conducted by Ströhlein and Heiss, tumor-reactive CD4+/CD8+ T-cells were induced after an initial treatment course with escalating doses of trifunctional antibodies, followed by re-stimulation 4 weeks after completion of initial treatment. [71][72][73] Catumaxomab in the treatment of ovarian cancer ascites (anti-EpCAM × anti-CD3) Malignant ascites affects approximately 10% of patients with recurrent EOC and is associated with troublesome symptoms. Intraperitoneal administration of catumaxomab was first studied in the treatment of eight patients (two of whom had ovarian cancer) with malignant ascites in 2005. 74 All patients had >2% EpCAM expression, determined via flow cytometry on nucleated ascites cells. Trifunctional antibodies were administered intraperitoneally over 6-8 hours for at least four cycles. Seven of eight patients required no further paracentesis during follow-up or until death, with a mean paracentesis-free interval of 38 weeks (median 21.5, range 4-136). Clinical response, with disappearance of ascites accumulation, was seen in all patients, which was correlated with elimination of tumor cells (P = 0.0014). 74 Following this study, a multicenter phase I/II clinical trial was conducted evaluating the tolerability and efficacy of intraperitoneal catumaxomab in ovarian cancer patients with malignant ascites containing EpCAM-positive tumor cells. 75 Twenty-three women with recurrent ascites due to pretreated refractory ovarian cancer were treated with four to five intraperitoneal infusions of catumaxomab in doses of 5-200 µg within 9-13 days. Treatment with catumaxomab resulted in significant and sustained reduction of ascites. Of the 23 patients, 22 did not require paracentesis between the last infusion and the end of study at day 37. 75 The most commonly reported grade 2/3 adverse events in the study included fever, nausea, and vomiting. Recently, a prospective, randomized phase II/III study was conducted comparing the efficacy of catumaxomab plus paracentesis with paracentesis alone in the treatment of malignant ascites. 
67 Following paracentesis, catumaxomab was administered at doses of 10, 20, 50, and 150 µg on days 0, 3, 7, and 10, respectively, via an intraperitoneal catheter. The primary efficacy endpoint was puncture-free survival. Secondary efficacy parameters included time to next paracentesis, ascites signs and symptoms, and overall survival (OS). Puncture-free survival was significantly longer in the catumaxomab group (median 46 days) than in the control group (median 11 days) (hazard ratio [HR] 0.254; P < 0.0001), as was median time to next paracentesis (77 versus 13 days; P < 0.0001). Within the ovarian cancer cohort, median puncture-free survival was 52 days in the catumaxomab arm versus 11 days in the placebo arm (HR 0.205; P < 0.0001). In addition, catumaxomab-treated patients had fewer signs and symptoms of ascites than control patients. The most common adverse events included fever, abdominal pain, nausea, and vomiting. One patient had a grade 3 gastric hemorrhage. Findings from the above trials ultimately resulted in the European Medicines Agency approval of catumaxomab for the treatment of malignant ascites in patients with EpCAM-positive tumors for whom no standard therapy is available. 23 Immunomonitoring studies performed as part of the clinical trial were notable for a significant decline in EpCAM-positive tumor cells from a median screening value of 6,510 EpCAM-positive cells (165 patients) to a median of 27 cells on day 3 (133 patients), and to 0 cells (115 patients) on day 11 in the catumaxomab-treated arm. 76 In the control group, the tumor cell number increased from 9,373 EpCAM-positive tumor cells at screening (85 patients) to 18,929 EpCAM-positive tumor cells (74 patients) at the puncture visit. Furthermore, catumaxomab treatment was associated with a significant decline (63%; P < 0.001) in ascites fluid levels of vascular endothelial growth factor (VEGF), inhibiting vascular permeability and translating into decreased ascites fluid production. Lastly, CD69+ (indicative of lymphocyte proliferation), CD4+, and CD8+ T-cell populations increased more than 2-fold in catumaxomab-treated subjects. The activation of peritoneal T-cells and concomitant decline in EpCAM-positive tumor cells establishes a cellular basis for the anti-tumor immunologic effects of the trifunctional antibody catumaxomab. 76 The palliative nature of the treatment of malignant ascites in patients with recurrent ovarian cancer necessitates prioritization of quality of life during treatment. Wimberger et al. conducted a post-trial ad hoc analysis of the above-described phase II/III study to determine the impact of catumaxomab on health-related quality of life (HR-QOL). 77 Deterioration in QOL scores appeared more rapidly in the control group than in the catumaxomab group (median 19-26 days versus 47-49 days). The difference in time to deterioration in QOL between the groups was statistically significant for all scores (P < 0.01). The chronicity of disease in patients with recurrent malignant ascites related to ovarian carcinoma led to the exploration of intraperitoneal changes resulting from treatment with catumaxomab. In a small retrospective series, ten patients previously treated with intraperitoneal catumaxomab underwent repeat surgical exploration for secondary cytoreduction, treatment of anastomotic leaks or ileus, or for colostomy reversal. 78 Catumaxomab treatment was associated with severe intra-abdominal adhesions, grade 3. 
Conversely, no patients had ascites volume >500 mL despite extensive carcinomatosis in eight of ten subjects. Given the mouse-rat origin of catumaxomab, limitations in retreatment were anticipated due to formation of human anti-drug antibodies (HADA). However, in 2011, Pietzner et al. described a case of successful re-treatment with catumaxomab for the management of malignant ascites. 79 A 74-year-old female patient with breast cancer and peritoneal carcinomatosis-associated ascites was treated with catumaxomab, with resolution of her symptoms. The patient remained puncture free for 45 days, and evaluation of HADA levels demonstrated increased levels after cycle 1, followed by a considerable decline and delayed increase in ascites HADA levels for each subsequent cycle. This experience suggested that a repeat cycle of catumaxomab might be feasible and effective in patients suffering from recurrent malignant effusions. More recently, Ott et al. 80 conducted a post-trial ad hoc analysis in order to determine the impact of human anti-mouse antibody (HAMA) levels 8 days after the last infusion on clinical outcomes in patients treated on the European Union Drug Regulating Authorities Clinical Trials (EudraCT) clinical trial (NCT00836654). 81 There was a strong correlation between humoral response and clinical outcome, with HAMA-positive patients showing improvement in median puncture-free survival, median time to next therapeutic puncture, as well as median OS. 80 Specifically, median OS in HAMA-positive ovarian cancer patients was 163 days, compared with 82 days in the HAMA-negative patients (P = 0.0123; HR 0.407). This finding remained significant in both the intention-to-treat and per-protocol analyses. Conclusion Despite aggressive surgical cytoreduction and adjuvant chemotherapy, disease recurrence continues to be problematic for patients with advanced-stage EOC. Immune modulation has gained significant attention in recent years as a novel anti-cancer approach. Catumaxomab is an innovative trifunctional antibody that relies on direct tumor cell and T-cell binding while simultaneously recruiting accessory immune cells to treat malignant ascites. This complex association results in tumor cell kill and simultaneously induces a humoral immune response. Currently, additional trials are being conducted to explore the safety of a 3-hour infusion. 83 Additionally, phase I/II clinical trials exploring combination treatment using catumaxomab and traditional cytotoxic chemotherapy are required to determine if there is therapeutic efficacy to combined treatment. The ENGOT-ov8 study, 84 a multicenter prospective phase II clinical trial, is currently open, and is exploring the feasibility and clinical activity of intraperitoneal catumaxomab followed by systemic intravenous chemotherapy in patients with recurrent ovarian cancer. Ultimately, as with all novel therapies, symptom relief and treatment goals must be weighed against patient discomfort and adverse events. Careful patient selection and identification of risk factors are required to help reduce the significant side effects associated with treatment. Disclosure The authors have no conflicts of interest to disclose. 
Clinical Pharmacology: Advances and Applications Publish your work in this journal Submit your manuscript here: http://www.dovepress.com/clinical-pharmacology-advances-and-applications-journal Clinical Pharmacology: Advances and Applications is an international, peer-reviewed, open access journal publishing original research, reports, reviews and commentaries on all areas of drug experience in humans. The manuscript management system is completely online and includes a very quick and fair peer-review system, which is all easy to use.
2017-03-30T22:01:09.472Z
0001-01-01T00:00:00.000
{ "year": 2013, "sha1": "d9bff646740864f2a65682ad6d15f85d05f716d0", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=17705", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d9bff646740864f2a65682ad6d15f85d05f716d0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
115167115
pes2o/s2orc
v3-fos-license
Riemannian submersions with discrete spectrum We prove some estimates on the spectrum of the Laplacian of the total space of a Riemannian submersion in terms of the spectrum of the Laplacian of the base and the geometry of the fibers. When the fibers of the submersions are compact and minimal, we prove that the total space is discrete if and only if the base is discrete. When the fibers are not minimal, we prove a discreteness criterion for the total space in terms of the relative growth of the mean curvature of the fibers and the mean curvature of the geodesic spheres in the base. We discuss in particular the case of warped products. INTRODUCTION Let M be a complete Riemannian manifold and △ = div • grad be the Laplace-Beltrami operator acting on the space of smooth functions on M with compact support. The operator △ is essentially self-adjoint, thus it has a unique self-adjoint extension, to an unbounded operator, denoted by △, whose domain is the set of functions f ∈ L 2 (M) so that △f ∈ L 2 (M). Recall that the spectrum of a self-adjoint operator A, denoted by σ(A), is formed by all λ ∈ R for which A − λI is not injective or the inverse operator (A − λI) −1 is unbounded, [7]. In this paper we are going to study the spectrum of −△, (the operator △ is negative), and we refer to σ(−△) as the spectrum of M and in this case only, we denote by σ(M). It is important (in our study) to distinguish the various types of elements of the spectrum of M in order to have a better understanding of the relations between M and σ(M). This way, it is said that the set of all eigenvalues of σ(M) is the point spectrum σ p (M), while the discrete spectrum σ d (M) is the set of all isolated 1 eigenvalues of finite multiplicity. The essential spectrum σ ess (M) = σ(M) \ σ d (M) is the complement of the discrete spectrum. There are examples of warped manifolds R n × γ S 1 , with discrete spectrum, therefore σ ess (A 1 ) = ∅, see [1], but the spectrum σ ess (R n ) = [0, ∞). Riemannian manifolds whose Laplacian has empty essential spectrum are sometimes called discrete in the literature. In this paper we consider Riemannian submersions π : M → N and we prove some spectral estimates relating the (essential) spectrum of M and N. Riemannian submersions were introduced in the sixties by B. O'Neill and A. Gray (see [13,19,20]) as a tool to study the geometry of a Riemannian manifold with an additional structure in terms of certain components, that is, the fibers and the base space. When M (and thus also N) is compact, estimates on the eigenvalues of the Laplacian of M have been studied in [5], under the assumption that the mean curvature vector of the fibers is basic, i.e., π-related to some vector field on the basis. We will consider here the non compact case, assuming initially that the fibers are minimal. An important class of examples are Riemannian homogeneous spaces G/K, where G is a Lie group endowed with a bi-invariant Riemannian metric and K is a closed subgroup of G, see [19] for details. The projection G → G/K is a Riemannian submersions with totally geodesic fibers, and with fibers diffeomorphic to K. Another important class of examples of manifolds that can be described as the total space of Riemannian submersions with minimal fibers are the homogeneous 3-dimensional Riemannian manifolds with isometry group of dimension four, see [22]. 
This class includes the special linear group SL(2, R) endowed with a family of left-invariant metrics, which is the total space of Riemannian submersions with base given by the hyperbolic spaces, and fibers diffeomorphic to S 1 . Given a Riemannian submersion π:M → N with compact minimal fibers, we prove that σ ess (M) = ∅ ⇐⇒ σ ess (N) = ∅, see Theorem 1. This result coincides with Baider's result when M = X ×Y is a product manifold, Y is compact, N = X and π : X × Y → X is the projection on the first factor. Theorem 1. Let π : M → N be a Riemannian submersion with compact minimal fibers. Then A few remarks on this result are in order. First, we observe that for the inequality inf σ ess (M) ≤ inf σ ess (N), Lemma 3.7, we need only the compactness of the fibers with uniformly bounded volume, meaning that 0 < c 2 ≤ vol(F p ) ≤ C 2 for all p ∈ N. Second, the example of [1] shows that the assumption of minimality of the fibers is necessary in Theorem 1. In fact, one has examples of Riemannian submersions having compact fibers with discrete base and non discrete total space, or with discrete total space but not discrete base, see Example 4.2. In the second part of the paper we study the essential spectrum of the total space when the minimality assumption on the fiber is dropped. In this case, we prove that a sufficient condition for the discreteness of the total space is that the growth of the mean curvature of the fibers at infinity is controlled by the growth of the mean curvature of the geodesic spheres in the base manifold. In order to state our result, let us introduce the following terminology. The cut locus cut(p) of a point p in a Riemannian n-manifold is said to be thin, if its (n − 1)-Hausdorff measure zero, H n−1 (cut(p)) = 0. is proper then σ ess (M) = ∅. Here ρ p 0 is the distance function in N to p 0 . The Theorem 2 can be interpreted geometrically in terms of the mean curvature of the geodesic spheres in the base and the mean curvature of the fibers. Namely, the Laplacian of the distance function ρ p 0 (p) is exactly the value of the mean curvature of the geodesic sphere S p = ρ −1 p 0 ρ p 0 (p) at the point p. Thus, assumption says that the sum of the mean curvature of the geodesic balls in N and the mean curvature of the fibers must diverge at infinity. Theorem 1 is proved in Section 3 and Theorem 2 in Section 4. An alternative statement of Theorem 2 can be given in terms of radial curvature, see Corollary 4.2. There are two basic ingredients for the proof of our results. • The Decomposition Principle, that relates the fundamental tone of the complement of compact sets with the infimum of the essential spectrum, see Proposition 3.2; • Two estimates of the fundamental tones of open sets in terms of the divergence of vector fields, proved recently in [2] and [3], see Propositions 3.4 and 3.5. RIEMANNIAN SUBMERSIONS 2.1. Preliminaries. Given manifolds M and N, a smooth surjective map π : M → N is a submersion if the differential dπ(q) has maximal rank for every q ∈ M. If π : M → N is a submersion, then for all p ∈ N the inverse image F p = π −1 (p) is a smooth embedded submanifold of M, that will be called the fiber at p. If M and N are Riemannian manifolds, then a submersion π : M → N is called a Riemannian submersion if for all p ∈ N and all q ∈ F p , the restriction of dπ(q) to the orthogonal subspace T q F ⊥ p is an isometry onto T p M. 
Given p ∈ N and q ∈ F p , a tangent vector ξ ∈ T q M is said to be vertical if it is tangent to F p , and it is horizontal if it belongs to the orthogonal space (T q F p ) ⊥ . Let D = (T F ) ⊥ ⊂ T M denote the smooth rank k distribution on M consisting of horizontal vectors. The orthogonal distribution D ⊥ is clearly integrable, the fibers of the submersion being its maximal integral leaves. Given ξ ∈ T M, its horizontal and vertical components are denoted respectively by ξ h and ξ v . The second fundamental form of the fibers is a symmetric tensor S F : For any given vector field X ∈ X(N), there exists a unique horizontal X ∈ X(M) which is π-related to X, this is, for any p ∈ N and q ∈ F p , then dπ q ( X q ) = X p , called horizontal lifting of X. A horizontal vector field X ∈ X(M) is called basic if it is π-related to some vector field X ∈ X(N). If X and Y are basic vector fields, then these observations follows easily. Let us now consider the geometry of the fibers. First, we observe that the fibers are totally geodesic submanifolds of M exactly when S F = 0. The mean curvature vector of the fiber is the horizontal vector field H defined by where (e i ) k i=1 is a local orthonormal frame for the fiber through q. Observe that H is not basic in general. For instance, when n = 1, i.e., when the fibers are hypersurfaces of M, then H is basic if and only if all the fibers have constant mean curvature. The fibers are minimal submanifolds of M when H ≡ 0. Differential operators. Let π :M → N be a Riemannian submersion. Besides the natural operations of lifting a vector or vector fields in N to horizontal vectors and basic vector fields one has that functions on N can be lifted to functions on M that are constant along the fibers. Such operations preserves the regularity of the lifted objects. One can also (locally) lift curves in the base γ : [a, b] → N to horizontal curves γ : [a, c) → M with the same regularity as γ with arbitrary initial condition on the fiber F γ(a) . We will need formulas relating the derivatives of π-related objects in M and N. Let us start with divergence of vector fields. The following relation holds between the divergence of X and X at p ∈ N and q ∈ F p . In particular, if the fibers are minimal, then div M ( X) = div N (X). Proof. Formula (2.2) is obtained by a direct computation of the left-hand side, using a local orthonormal frame e 1 , . . . , e k , e k+1 , . . . , e k+n of T M, where e 1 , . . . , e k are basic fields. The equality follows using equalities (a) and (c) in Subsection 2.1, and formula (2.1) for the mean curvature. Given a smooth function f : N → R, denote byf = f • π : M → R its lifting to M. It is easy to see that the gradient grad Mf off is the horizontal lifting of the gradient grad N f . If we denote with a tilde X the horizontal lifting of a vector field X ∈ X(N), then the previous statement can be written as given a function f : M → R, one can define a function f av : N → R by averaging f on each fiber where dF p is the volume element of the fiber F p relative to the induced metric. We are assuming that this integral is finite. As to the gradient of the averaged function f av , we have the following lemma. Lemma 2.2. Let p ∈ N and v ∈ T p N and denote by V the smooth normal vector field along F p defined by the property dπ q (V q ) = v for all q ∈ F p . Then, for any smooth function f : M → R Proof. A standard calculation as in the first variation formula for the volume functional of the fibers. 
Notice that when f ≡ 1, then f av is the volume function of the fibers, and (2.4) reproduces the first variation formula for the volume. Observe that, in (2.4), the gradient grad M f need not be basic or even horizontal 2 . An averaging procedure is available also to produce vector fields X av on the basis out of vector fields X defined in the total space. If X ∈ X(M), let X av ∈ X(N) be defined by Observe that the integrand above is a function on F p taking values in the fixed vector space T p N. If X ∈ X(M) is a basic vector field, π-related to the vector field X * ∈ X(N), then (X av ) p = vol(F p ) · (X * ) p , where vol denotes the volume. Using the notion of averaged field, equality (2.4) can be rewritten as Remark 2.3. From the above formula it follows easily that the averaged mean curvature vector field H av vanishes at the point p ∈ N if and only if p is a critical point of the function z → vol(F z ) in N. This happens, in particular, when the leaf F p is minimal. When all the fibers are minimal, or more generally when the averaged mean curvature vector field H av vanishes identically, then the volume of the fibers is constant. Proof. Suppose first that h is smooth. By the Divergence Theorem, Fubini's Theorem for Riemannian submersions and 2.4 we have In fact, a gradient is basic if and only if it is horizontal. If h ∈ L 2 (N) there exists a sequence of smooth functions h k ∈ C ∞ (N) converging to h with respect to the L 2 -norm. On the other hand Since h k → h in L 2 (N) then 2.5 holds. Observe that we used that the volume of the minimal fibers is constant, see Remark 2.3. SPECTRAL ESTIMATES The proof follows easily from (2.2) applied to the vector fields X = grad Mf and X * = grad N f , using (2.3). Decomposition Principle. Let K ⊂ M be a compact set of the same dimension of M. The Laplace-Beltrami operator △ of M acting on the space C ∞ 0 (M \ K) of smooth compactly supported functions of M \ K has a self-adjoint extension, denoted by △ ′ . The Decomposition Principle [11] says that σ ess (M) = σ ess (M \ K). On the other hand, . We will show that inf σ ess (M) ≤ µ. To that we will suppose that µ < ∞, otherwise there is nothing to prove. Let K 1 ⊂ K 2 ⊂ · · · be a sequence of compact sets In particular, σ ess (M) is empty if and only if given any compact exhaustion Let H be a Hilbert space and A : D ⊂ H → H be a densely defined self-adjoint operator. Given λ ∈ R, we write A ≥ λ if Ax, x ≥ λ x 2 for all x ∈ D. By the Spectral Theorem for (unbounded) self-adjoint operators, we have that A ≥ λ iff σ(A) ⊂ [λ, +∞). Let us write A > −∞ if there exists λ * ∈ R such that A ≥ λ * . Lemma 3.3. Let A : D ⊂ H → H be a self-adjoint operator with A > −∞, and let λ ∈ R be fixed. Assume that for all ε > 0 there exists an infinite dimensional subspace G ε ⊂ D such that Ax, x < (λ + ε) x 2 for all x ∈ G ε . Then, This lemma is well known, see [8] but for sake of completeness we present here its proof. Proof. First we will show that σ(A) ∩ (−∞, λ] = σ(A) ∩ [λ * , λ] = ∅. Take ε k = 1/k, k ≥ 1. By our hypothesis there exists x k = 0 such that Ax k , x k < (λ + 1/k) . . , n, and set X = i H i ⊂ D. This is clearly an invariant subspace of A. Since X has finite dimension, then D = X ⊕ X 1 where X 1 = X ⊥ ∩ D is also invariant by A. Denote by A 1 the restriction of A to the Hilbert space X 1 which is still self-adjoint. Clearly, σ(A 1 ) = σ(A)\{λ 1 , . . . , λ n } and σ ess (A 1 ) = σ ess (A). In particular, we have σ(A 1 ) ∩ (−∞, λ] ⊂ σ ess (A 1 ). 
Using the infinite dimensionality of the space G ε , it is now easy to see that the assumptions of our lemma hold for the operator A 1 , and the first part of the proof applies to obtain Let us recall from [2] and [3] the following estimates for the fundamental tone of open sets of Riemannian manifolds. Proposition 3.4. Let Ω ⊂ M be an open set of a Riemannian manifold. Then where the supremum in taken over all smooth vector fields X in Ω satisfying inf Ω div(X) > 0, and sup Ω X < +∞. Remark 3.6. Propositions (3.4) and (3.5) hold for vector fields X of class In particular, if the fibers are minimal, then Proof. Let ε > 0 and choose f ε ∈ C ∞ 0 (Ω) such that (3.8) Let us consider the functionf ε = f ε • π. By the assumption that the fibers of π are compact,f ε has compact support in M. Using Fubini's Theorem for submersions we have Similarly, using (2.3), we have Using (3.8), (3.9) and (3.11), we then obtain This proves (3.5). If all the fibers are minimal (or more generally if the averaged mean curvature vector field H av vanishes identically on N, see Remark 2.3), then the volume of the fibers is constant, and inequality (3.6) follows from (3.5). To prove the inequality (3.7) we pick a compact subset K ⊂ M and set K 0 = π(K) and let K = π −1 (K 0 ). The set K is compact by the assumption that the fibers of π are compact. Let Ω = N \ K 0 and Ω = π −1 (Ω) = M \ K. Clearly, Ω ⊂ M \ K and thus λ * ( Ω) ≥ λ * (M \ K). Hence, using (3.5) we get Taking the supremum over all compact subset K ⊂ M in the left-hand side, we obtain the desired inequality. Now consider the case that the fibers of the submersion π : M → N are compact and minimal. Proof. In view of (3.6), it suffices to show the inequality λ * ( Ω) ≥ λ * (Ω). To this aim, we will use the estimate in (3.4). We observe initially that it suffices to prove the inequality when Ω is bounded. Namely, the general case follows from λ * (Ω) = lim n→∞ λ * (Ω n ), by considering an exhaustion of Ω by a sequence of bounded open subsets Ω n . Note that Ω is bounded if and only if Ω is bounded, by the compactness of the fibers. Let f be the first eigenfunction of the problem △ N u + λu = 0 in Ω with Dirichlet boundary conditions, that can be assumed to be positive in Ω. Set X = −grad N log f , so that div N (X) − |X| 2 = λ 1 (Ω) is constant in Ω. If X is the horizontal lifting of X, then clearly | X q | = |X π(q) | for all q ∈ Ω. Moreover, by Lemma 2.1, since H = 0, div M ( X) q = div N (X) π(q) . Other interesting examples of applications of Theorem 1 arise from non compact Lie groups. Example. Consider the 2 × 2 special linear group SL(2, R). There exists a 2-parameter family of left-invariant Riemannian metrics g κ,τ , with κ < 0 and τ = 0, for which SL(2, R), g κ,τ → H 2 (κ) is a Riemannian submersion with geodesic fibers diffeomorphic to the circle S 1 . An explicit description of these metrics can be found, for instance, in [24]. Endowed with these metrics, SL(2, R) is one of the eight homogeneous Riemannian 3-geometries, as classified in [22], and its isometry group has dimension 4. MEAN CURVATURE OF GEODESIC SPHERES VERSUS MEAN CURVATURE OF THE FIBERS. PROOF OF THEOREM 2. We will now drop the minimality and the compactness assumption on the fibers, however, we will make some assumptions on the curvature of the base and the fibers of the submersion. Assume that (N, g N ) has a pole p 0 or more generally has a point p 0 with thin cut locus, see the Introduction. 
For p ∈ N \ {p 0 }, let γ p : [0, 1] → N be the unique affinely parameterized geodesic in (N, g N ) such that γ p (0) = p 0 and γ p (1) = p. The radial curvature function of (N, g N , p 0 ), denoted by κ p 0 : N → R, is defined by κ p 0 (p) = max σ sec(σ), where sec is the section curvature and the maximum is taken over all 2-planes σ ⊂ T p N containing the direction γ ′ p (1). Finally, let us denote by ρ p 0 : N → [0, +∞) the distance function in N given by ρ p 0 (p) = dist N (p, p 0 ). We are now ready for Proof of Theorem 2. Assume first that π : M → N is a Riemannian submersion satisfying the following assumptions: (a) (N, g N ) has a pole p 0 . (b) the function h(q) = (△ N ρ p 0 ) π(q) + g N (grad N ρ p 0 ) π(q) , dπ q (H q ) is proper. If p 0 has thin cut locus, the same proof above holds, since X = grad M ρ p 0 satisfies the Proposition 3.4 and therefore 4.1, see Remark 3.6. Proof. Using the Hessian Comparison Theorem [14,Chapter 2], under the assumption (4.2) one has: Considering an orthogonal basis of T p N of the form {∇ N ρ p 0 , e 1 , . . . , e n−1 }, where {e 1 , . . . , e n−1 } is an orthonormal basis of T p S p , and taking the trace of the symmetric bilinear forms in the two sides of (4.4), we get It is clear that (4.3) implies that h(q) = △ N ρ p 0 + g N grad N ρ p 0 , dπ q (H q )) is proper. 4.1. Warped products. Let (N, g N ) and (F, g F ) be Riemannian manifolds and let ψ : N → R + be a smooth function. The warped product manifold M = N × ψ F is the product manifold N ×F endowed with the Riemannian metric g N + ψ 2 g F . It is immediate to see that the projection π : M → N onto the first factor is a Riemannian submersion, with fiber F p = {p} × F . Among Riemannian submersions, warped products are characterized by the following properties: • the horizontal distribution is integrable, and its leaves are totally geodesic; • the fibers are totally umbilical. For warped products, the results of the paper can be stated in a more explicit form in terms of the warping function f . The mean curvature of the fibers are given by where ψ is the lifting of ψ. Proof. Part (a) follows from Proposition 3.7, observing that the volume of the fiber F p = {p} × F equals ψ(p) dim(F ) vol(F ). Part (b) follows from Theorem 2 and formula (4.6). ii. The limit µ = lim sup r→∞ log(vol(B W (r))) r < ∞, where B W (r) is the geodesic ball centered at a point p = (0, ξ) ∈ W and radius r. The items i.and ii. imply by Brooks' Theorem [6] that σ ess (W ) = ∅. This gives an example of a Riemannian submersion π : (W, dw 2 ) → (R n , ds 2 ) where the base space is discrete but the total space is not, while the fiber is compact but not minimal. An example of a Riemannian submersion π : (R n ×S 1 , dw 2 ) → (R n , ds 2 ) where the total space is discrete but the base space is not, while the fiber is compact but not minimal is presented in [1,Proposition 4.3].
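The displayed expression for the fiber mean curvature referred to above (formula 4.6) did not survive the text extraction. For orientation, the standard warped-product computation gives the following, where the sign depends on the convention chosen for the second fundamental form and the tilde denotes the horizontal lifting:

% Fibers of M = N x_psi F with metric g_N + psi^2 g_F are totally umbilical:
\[
  H \;=\; -\dim(F)\,\widetilde{\operatorname{grad}_N(\log\psi)},
  \qquad
  \operatorname{vol}(F_p) \;=\; \psi(p)^{\dim(F)}\,\operatorname{vol}(F),
\]
% so that |H| = \dim(F)\,|\operatorname{grad}_N\psi|/\psi, consistent with
% the volume formula for the fibers used in part (a) above.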
Lessons from Fungal F-Box Proteins

The F-box domain, so called after a conserved domain found in human cyclin F (5), was described in 1996 (6) after first being denoted a conserved N-terminal domain found in a subset of proteins (110).
The F-box hypothesis was introduced shortly after (162,185) and holds that F-box-containing proteins (henceforth F-box proteins) act as scavengers in the cell, collecting "junk" proteins to deliver to a "waste processor," called the SCF complex, to which they dock through their F-box domain. In the SCF complex, the junk proteins are marked with ubiquitin for "incineration" in the proteasome. F-box proteins do not act indiscriminately but recruit specific, often modified proteins to the SCF complex and in this way regulate the level of certain proteins in a cell. F-box proteins are found in all eukaryotes and display a large variety of functions. In fungi they are, for example, involved in control of the cell division cycle, glucose sensing, mitochondrial connectivity, and control of the circadian clock. F-box proteins are commonly identified by the presence of a stretch of primary sequence that matches the consensus for an F-box domain (Fig. 1). However, it can be questioned whether just the occurrence of an F-box domain in a protein sequence is sufficient to assume compliance with the F-box hypothesis. The F-box hypothesis is based on the assumption that an F box mediates assembly into an SCF complex through binding to the Skp1 subunit (Fig. 2). The SCF complex consists of Skp1 (suppressor of kinetochore protein mutant) (34), Cul1 (Cullin) (135), Rbx1 (ring-box protein) (86), and an F-box protein and catalyzes (like other E3 ligases), in cooperation with the E1 and E2 enzymes, the transfer of the small protein ubiquitin to the target protein (108,162). Among fungi, the methods of regulation of the SCF complexes and hence the methods of regulation of the F-box proteins in these complexes appear to differ. SCF complexes are likely activated and regulated through a recycling mechanism (35), which involves three main contributors: the neddylator protein DCN1, responsible for the transfer of Nedd8 to Cul1 (112,133,155,159,174,211); the deneddylator CSN (COP9 signalosome) (136,137,146,179,200) (reviewed in references 159, 201, and 208); and the CAND1 protein (127,215), which binds to deneddylated Cul1 and competes out the Skp1-F-box complex from the core of the SCF complex. A new round of neddylation removes CAND1 and thereby creates binding space for a new Skp1-Fbox complex. In budding yeast (Saccharomyces cerevisiae), deletion mutants for Nedd8 and CSN5, the CSN subunit responsible for the deneddylation reaction, are both viable (36,115,124). This means that, although the components of the SCF recycling mechanism are present, this process is not required for survival. A second difference in budding yeast in comparison to other fungi is that is does not have the CAND1 protein, adding to the notion that in budding yeast recycling acts differently. In fission yeast (Schizosaccharomyces pombe), CAND1 is present, and Nedd8 is required for survival (155), but the CSN5 subunit is not (145,216). Apparently, in fission yeast, the neddylation reaction is required for proper SCF function, but deneddylation is not, suggesting that in fission yeast an alternative deneddylation may be present. In Neurospora crassa, a deletion mutant for subunit 2 of CSN, ⌬csn-2, is viable but lacks a normal circadian rhythm and conidiation (64), while in Aspergillus nidulans, four CSN mutants, including one for subunit 5 (⌬csne), all lack fruiting body development (25,26). 
Together, these data suggest that, in filamentous fungi, proper recycling of the SCF is strictly required only for certain developmental processes, in accordance with the requirements of CSN in development in more-complex, multicellular organisms (reviewed in reference 179). Some F-box proteins appear to function without binding to Skp1, suggesting that not all F-box proteins take part in an SCF complex. This also means that not all proteins interacting with an F-box protein will be ubiquitinated and proteasomally degraded. In another deviation from the F-box hypothesis, some F-box protein/Skp1 complexes do not seem to be involved in ubiquitination. Furthermore, even when an F-box domain mediates assembly into an SCF complex, the result may be selfubiquitination rather than fulfillment of a scavenger function. Since 1996, several review articles covering the emerging theme of ubiquitin-mediated protein degradation and the widespread occurrence of F-box proteins have been published (38,68,71,72,92,118,198,204,205). Here, we discuss fungal F-box proteins, including their targets (if identified), and when possible, classify these F-box proteins according to degree of compliance with the F-box hypothesis. Most literature on fungal F-box proteins covers those found in budding yeast and, to a lesser extent, fission yeast, but important findings have also been reported for filamentous ascomycetes. In Table 1, fungal F-box proteins described in the literature are listed according to their main cellular function. The distantly related budding and fission yeasts share 10 (likely) orthologous F-box proteins. Budding yeast contains an additional 11 F-box proteins and fission yeast 7 (89). Cdc4, Grr1, and Met30 from budding yeast and their counterparts in other fungi are the most studied fungal F-box proteins and are conserved throughout the fungal kingdom. In total, 31 F-box proteins are discussed, exclusively from ascomycetes: the "model" fungi S. cerevisiae, S. pombe, Kluyveromyces lactis, A. nidulans, Hypocrea jecorina, and N. crassa and the pathogenic fungi Candida albicans, Fusarium graminearum, Fusarium oxysporum, and Magnaporthe grisea. F-BOX PROTEINS COMPLYING WITH THE F-BOX HYPOTHESIS To date, the best-described fungal F-box proteins comply with the F-box hypothesis; the targets of these F-box proteins are commonly first phosphorylated before being recognized and ubiquitinated by the SCF complex and finally degraded by the proteasome. In a study (113) in which interactors of Skp1 and the SCF complex were identified in budding yeast, 13 F-box proteins were found to bind Skp1, and these 13 could all be copurified with an SCF complex. These are Cdc4, Ctf13, Dia2, Grr1, Hrt3, Mdm30, Rcy1, Ufo1, and five uncharacterized F-box proteins. In another study investigating the binding partners of Skp1 and Cul1 (181), Met30 and Saf1 were also found to bind Skp1, and two more uncharacterized F-box proteins were found to bind Skp1 and/or Cul1. In the study mentioned above (113), autoubiquitination of F-box proteins was also investigated using two different E2 enzymes, Cdc34 and Ubc4. Twelve out of the 13 F-box proteins showed self-ubiquitination; only Grr1 was found not to be ubiquitinated, and Dia2 and Mdm30 showed very little ubiquitination. Also, it was demonstrated that these F-box proteins were differentially ubiquitinated by the two different E2 enzymes and that differ-ent numbers of ubiquitin molecules were attached to the F-box proteins. 
In other reports, ubiquitination and degradation of Cdc4, Met30, and Grr1 were demonstrated (55,217). It is still an open question whether F-box proteins are ubiquitinated and degraded together with their targets in each degradation round. Another possibility is that they are recycled after recruiting their targets and ubiquitinated and degraded only when unbound to a target protein. In a study with fission yeast, the 11 F-box proteins investigated (Pop1/2, Pof1, Pof3, Pof5, Pof7, Pof8, Pof9, Pof10, Pof12, Pof13, and Fbh1/Pof15) (120) could all bind to Skp1. The interactions were further studied with a temperature-sensitive mutant of Skp1 with point mutations in the Skp1-F-box interaction core. Only the binding to Pof1, Pof3, and Pof10 was weaker with this mutant than with the wild-type Skp1. The effect of this weakened binding on the function of the individual F-box protein is not known, and targets of most of these fission yeast F-box proteins remain to be identified. C-TERMINAL PROTEIN-PROTEIN INTERACTION DOMAINS Most of the fungal F-box proteins that comply with the F-box hypothesis have a recognizable C-terminal protein-protein interaction domain (Table 1). Four F-box proteins carry a WD40 domain: Cdc4 (and its orthologs), Fwd1, Ufo1, and Met30 (and its orthologs). WD40 is a domain of about 40 amino acids often terminating with the two amino acids Trp and Asp (WD) and forms a beta-propeller structure (150). Grr1 and its orthologs carry an LRR (leucine-rich repeat) domain (85), a repeat of about 25 amino acids forming a nonglobular, crescent-shaped structure. Saf1 carries an RCC1 (regulator of chromosome condensation 1) repeat (168), another domain forming a beta-propeller structure involved in protein-protein interactions. Of the F-box proteins that comply with the F-box hypothesis, only Mdm30 does not contain any known protein-protein interaction motif, and it is unknown how it interacts with its targets Fzo1, Mdm34, and Gal4c. The presence of a recognizable protein-protein interaction domain might be an indication that an F-box protein complies with the F-box hypothesis. One of the uncharacterized budding yeast F-box proteins (Ylr352w) that binds Skp1 and Cul1 also contains an LRR domain, suggesting that this F-box protein might also comply with the F-box hypothesis. For Cdc4, Dia2, Grr1, Met30, Mdm30, Saf1, and Ufo1 of budding yeast, one or more targets are known, and most of these targets are degraded via the SCF complex. This suggests that at least these seven F-box proteins completely fulfill the F-box hypothesis. For Cdc4, Dia2, Grr1, and Met30, homologs in other fungal species have been found and characterized, in some cases together with their targets.
FIG. 1. F-box consensus sequence. The motif is about 45 amino acids long and based on the HMM logo for the F-box motif (178). Highly conserved amino acids are underlined, and the two most conserved amino residues, the leucine and proline at positions 6 and 7, respectively, are indicated in red. At each position, amino acids are ordered from top to bottom by decreasing occurrence in F-box domains.
FIG. 2. SCF complex and ubiquitination of target proteins. The SCF complex functions within the ubiquitination reaction through combined action with the E1 and E2 enzymes. F-box proteins bind to Skp1 via their F-box domain and to targets via their C-terminal domain, thereby presenting the target for ubiquitination. Ub, ubiquitin.
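Since candidate F-box proteins are typically flagged by a primary-sequence match to a consensus of the kind summarized in the Fig. 1 legend, a minimal sketch of such a screen is given below. The pattern is a deliberately loose, hypothetical stand-in for the real ~45-residue profile (in practice one would run a profile-HMM search, for example hmmsearch against the Pfam F-box model, rather than a regular expression), and the two toy sequences are invented, not real fungal proteins.

import re

# Hypothetical, deliberately loose stand-in for the ~45-residue F-box consensus:
# the conserved leucine-proline pair (cf. the Fig. 1 legend) in a mostly
# hydrophobic context. Real screens use a profile HMM over all positions of the
# motif, not a regular expression like this one.
FBOX_LIKE = re.compile(r"[LIVMF].{2,4}LP.{1,3}[LIVMF].{10,30}[LIVMF]")

def fbox_candidates(proteins):
    """Return (name, start position, matched slice) for F-box-like hits."""
    hits = []
    for name, seq in proteins.items():
        m = FBOX_LIKE.search(seq)
        if m:
            hits.append((name, m.start(), m.group()))
    return hits

# Toy input: made-up sequences, not real Cdc4/Grr1/Met30 entries.
toy = {
    "candidate_A": "MSTLNSQLPKEVLLRILSYLDAKTLLNAFKDLSAKWREIAESVLK",
    "candidate_B": "MKKQQATGSGSGRRRAHDDDEEEQQQNNNPPPGGGSSS",
}

for name, pos, motif in fbox_candidates(toy):
    print(f"{name}: F-box-like match at residue {pos}: {motif}")

The only feature the toy pattern actually encodes is the conserved Leu-Pro pair highlighted in the Fig. 1 legend; a genuine screen weighs all positions of the motif and, as discussed above, a sequence match alone does not guarantee compliance with the F-box hypothesis.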
The degree of conservation of the functions of these F-box proteins between the different species can now be assessed by comparing the different phenotypes of deletion mutants and the conservation of targets. Cdc4: AN F-BOX PROTEIN CONTROLLING THE CELL DIVISION CYCLE, MORPHOGENESIS, NUTRIENT SENSING, AND CALCIUM SIGNALING In S. cerevisiae, Cdc4 (cell division cycle 4) regulates multiple processes in the cell by recruiting various proteins for degradation (Fig. 3), and especially in the cell cycle process, Cdc4 plays an important role by recruiting different cell cycle inhibitors, a transcription factor, a cyclin, and a replication factor for degradation. The CDC4 gene was first identified from a yeast mutant unable to initiate DNA replication during transition from the G 1 to the S phase (213). Cdc4 AND CELL DIVISION CYCLE Regulation of the transition from one cell cycle phase to the next involves proteins that inhibit or promote progression. Proteins from both categories have to be degraded at some point, either to ensure progression or to prevent premature initiation of a new phase. Cdc4 is required for the degradation of Sic1 and Far1, proteins that inhibit cell cycle progression, and for the degradation of Cdc6, a protein that promotes progression (97). Sic1 (substrate/subunit inhibitor of cyclin-dependent protein kinase 1) is phosphorylated by its inhibition target, Cdc28, and another kinase, Pho85 (153,209). Phosphorylated Sic1 is recognized by Cdc4 and marked for degradation by polyubiquitination (49,197). Recently, the transcription factor Swi5, which activates transcription of SIC1 (93), was also found to be degraded through interaction with Cdc4. Degradation of the transcription factor Swi5 via Cdc4 during the early G 1 phase allows efficient removal of Sic1 in the late G 1 phase (93). This means that Cdc4 is responsible for Sic1 removal by degrading both the protein itself and the activator of its transcription. Through Sic1 degradation, Cdc4 also regulates expression of OCH1, a gene encoding alpha-1,6-mannosyltransferase, suggesting that Cdc4 is involved in regulation of cell wall composition during the cell cycle (40). Far1 (factor arrest 1) is also phosphorylated by the Cdc28 kinase complex and then recognized by Cdc4 (67). Degradation of Far1 is nucleus specific, suggesting that Cdc4 may act specifically in the nucleus (20). Cdc6 (cell division cycle 6) is a DNA replication initiation factor that is degraded via Cdc4 in the late G 1 /early S phase as well as in the G 2 /M phase. Phosphorylation of Cdc6 to ensure recognition by Cdc4 at both time points requires the Cdc28 kinase. Cdc6 degradations at these two time points differ in the degradation rates and in the cyclins that take part in the Cdc28 kinase complex (42). The first difference is probably due to the fact that degradation depends on two different interaction domains in Cdc4 (163). It has also been suggested that Cdc4 has a role in the degradation of Clb6 (79), a cyclin that triggers, together with Clb5, the progression from G 1 into S phase. Clb6 is rapidly degraded at the end of the S phase and stabilized in cdc4 mutants.
Moreover, its sequence harbors Cdc4 degron motifs. Direct interaction, however, has not yet been demonstrated. Cdc4 also targets another F-box protein involved in kinetochore assembly and function, Ctf13 (see Ctf13: a Kinetochore Assembly F-Box Protein). S. pombe contains two homologs of CDC4, called POP1 and POP2 (polyploidy 1 and 2) (102). POP2 was also discovered in another study, where it was called SUD1 (stops unwanted diploidization 1) (80). Pop1 and Pop2 are structurally related but function independently from each other. The phenotypes of both deletion mutants are comparable in that both display polyploidization, but neither protein can fully take over the function of the other, since overexpression of POP1 or POP2 could not suppress the defects caused by loss of the other gene (101). The polyploidization phenotype is caused by the accumulation of the cyclin-dependent kinase (CDK) inhibitor Rum1 (80, 102) (homolog of budding yeast Sic1) and S-phase regulator Cdc18 (80, 207) (homolog of budding yeast Cdc6). These two proteins are normally degraded during defined stages of the cell cycle, but in the pop1 and pop2 mutants and the pop1 pop2 double mutant, the levels of these proteins are high compared to those in the wild type. The accumulation and polyubiquitination of Rum1 and Cdc18 in a proteasome-deficient mutant support the notion that Pop1 and Pop2 recruit these two proteins for degradation. Also, a direct interaction between Pop1 and Cdc18 was found using coimmunoprecipitation. Pop1 and Pop2 can form homodimers and heterodimers, resulting in three alternative SCF complexes, SCF Pop1/Pop1, SCF Pop1/Pop2, and SCF Pop2/Pop2, but the different molecular functions of these three complexes remain unclear (101). The S-phase cyclin Cig2 (homolog of budding yeast cyclin Cln2) is also stabilized in pop1 and pop2 deletion mutants, suggesting a role for both proteins in Cig2 degradation (210). Coimmunoprecipitation revealed that Pop1 and Cig2 interact in the cell independently from Pop2 and that this interaction requires phosphorylation of Cig2 and at least the central 93 amino acid residues (residues 181 to 273). In fungi, Pop1 and Pop2 are currently the only examples of two homologous F-box proteins functioning in the same degradation pathway. The advantage of having two F-box proteins for the same function might be that degradation of certain proteins can be fine-tuned and regulated at an extra level. The route of degradation of Cig2 and its budding yeast homolog Cln2 is remarkably different between the two yeasts: Cln2 is degraded by Grr1 in budding yeast (see Grr1 and the Cell Cycle), whereas Cig2 is degraded by Pop1 in fission yeast (the role of Pop2 in degradation of Cig2 is still unclear). Cdc4 AND PSEUDOHYPHAL GROWTH Cdc4 is also involved in degradation of transcription factors that regulate pseudohyphal growth. The transcription factor Tec1 (transposon enhancement control 1) is phosphorylated by mitogen-activated protein kinase Fus3 and then recognized by Cdc4, promoting its degradation (32). Tec1 is responsible for the onset of filamentous growth in S. cerevisiae. This morphological switch is made when nutrient availability is low, but the switch needs to revert after pheromone sensing to allow mating. Whether Cdc4 is solely responsible for degradation of Tec1 is disputable, because it has been shown that another F-box protein, Dia2, is also able to induce degradation of Tec1 after pheromone sensing (9). In the human pathogen C.
albicans, Cdc4 plays a role in the switch from hyphal to yeastlike growth, as demonstrated by deletion of CDC4, which results in constitutive hyphal growth (4). This dimorphic switch is important for pathogenicity, as the hyphal form contributes to the ability to penetrate the body and cause candidemia. In contrast to that of budding yeast, the CDC4 deletion mutant of C. albicans is viable and does not display an arrest in the G 1 phase. Possibly, Cdc4 in C. albicans has fewer targets than its budding yeast counterpart or accumulation of the same targets in a C. albicans ⌬cdc4 mutant does not (fully) inhibit growth. Similarity between the Cdc4 proteins of the two yeasts is 18% for the first 300 amino acids and 48% from residues 360 to 743, which includes the F-box domain and the WD40 motif. The two proteins indeed appear to have different functions, since CDC4 from C. albicans cannot complement the cdc4 strain of budding yeast (182). The constitutive hyphal growth phenotype of the C. albicans cdc4 strain is not due to the accumulation of C. albicans Far1, the homolog of the Cdc4 target Far1 in budding yeast. Sol1, the closest homolog of Sic1 in C. albicans, is degraded via Cdc4, but high levels of Sol1 are not responsible for the constitutive hyphal growth of the cdc4 mutant. Another possible target of Cdc4 involved in filamentous growth is Tec1, but no elevated levels were observed in C. albicans cdc4 mutants (4). This means that the target of Cdc4 in C. albicans whose removal is required for the dimorphic switch has not yet been identified and could be different from a Cdc4 target in budding yeast. Cdc4 AND GROWTH RESPONSES AFTER NUTRIENT SENSING The transcription factors Hac1 (homologous to Atf/Creb1) and Gcn4 (general control nonderepressible 4) are both involved in the activation of unfolded-protein-responsive genes whose products assist in the folding of proteins in the endoplasmic reticulum (ER) lumen upon ER stress. Hac1, a basic leucine zipper transcription factor, is degraded in the nucleus via Cdc4 when ER stress is removed (158). Gcn4 is also required for the activation of transcription of amino acid and purin biosynthesis genes during starvation. After the switch from poor to rich medium, Gcn4 is degraded via Cdc4, likely after being phosphorylated by Pho85 (139). The degree of conservation of this Cdc4 function is unclear because the respective targets have not been studied in this respect for other fungi. Cdc4 AND CALCIUM SENSING Cdc4 acts in calcium homeostasis by targeting Rcn1 for destruction upon calcium availability (94). Rcn1 (regulator of calcineurin 1) inhibits calcineurin, a phosphatase that mediates cellular responses after stress and Ca 2ϩ uptake (41). Calcineurin mediates its own inhibition by a negative-feedback loop: it stimulates the expression of RCN1 and stabilizes Rcn1 by dephosphorylation. Phosphorylation of Rcn1 makes it recognizable for Cdc4 and marks it for degradation, allowing calcineurin to break out of its negative-feedback loop and increase its activity. From the details described above, it is clear that between yeasts, Cdc4 is conserved in some functions, like the degradation of the CDK inhibitors Sic1 and Far1. Whether the function of Cdc4 in regulation of pseudohyphal growth is conserved between budding yeast and C. albicans is uncertain, since the target protein in this pathway in C. albicans has not yet been found. 
The involvement of Cdc4 in nutrient sensing and calcium signaling in filamentous fungi is unlikely, since deletion of CDC4 in these fungi has not been reported to lead to defects in these processes. A main difference between budding yeast and the other fungi is that in budding yeast the search for targets has been more intensive, for example, by use of yeast two-hybrid screens (93). Application of such screens to other fungi would be helpful to more fully evaluate the conservation of targets between the different fungi. Although no genetic studies of the Cdc4 homolog in N. crassa have been reported, this homolog was found to be targeted by a plant-defensive peptide. By use of a yeast twohybrid screen, defensin 1 from Pisum sativum was shown to interact specifically with Cdc4 (131). Defensins are plant peptides exhibiting an antifungal activity as part of the plant innate immune system. Interaction with Cdc4 can explain how defensin 1 inhibits fungal growth, namely, by interference with the fungal cell cycle. By use of microscopy, it was observed that defensin 1, tagged with a fluorophore, was localized in the nuclei of N. crassa and Fusarium solani, suggesting that defensin 1 can enter the fungal cell and interfere in the nucleus with the cell division cycle. Dia2: AN F-BOX PROTEIN INVOLVED IN DNA REPLICATION The F-box protein Dia2 (digs into agar 2) plays a role in DNA replication in S. cerevisiae and is thereby also involved in cell growth and division. As mentioned earlier, Tec1, a transcription factor regulating filamentation genes, is degraded via Dia2 (9), probably in joint action with Cdc4 (32). These two F-box proteins are also both capable of degrading ectopically expressed human cyclin E (99), even though they bear different protein-protein interaction domains (LRR and WD40, respectively). Deletion of DIA2 in budding yeast causes a defect in invasive and pseudohyphal growth, slower growth at low temperatures, early entry into the S phase, and accumulation of DNA damage (98). These defects were also observed in DIA2⌬ F-box mutants, suggesting that binding of Dia2 to Skp1 is necessary for these functions. Dia2 binds both early-and latefiring origins and is thereby involved in resetting the origin. It recruits the SCF complex to the replication origins, suggesting that a possible target becomes ubiquitinated there. Yra1, previously described as a protein involved in mRNA export (189), is an interaction partner of Dia2 and is required for Dia2 function at replication origins (191). Possibly, the SCF Dia2 complex binds origins with the assistance of Yra1. In another study (18), deletion of DIA2 resulted in accumulation of DNA damage after the collapse of replication forks. This suggests that a possible target of Dia2 may be found among proteins that interfere with replication fork stability in certain genomic regions. Also, genetic interactions of Dia2 were found with DNA replication, repair, and checkpoint pathways (18). A role for Dia2 in DNA repair was suggested as well by the requirement of Dia2 for resistance to certain DNA-damaging compounds. These observations indicate that there are likely more targets or functions of Dia2 than only targeting Tec1 for degradation. Pof3 from S. pombe is the ortholog of Dia2 from budding yeast. Deletion of POF3 results in multiple phenotypes: G 2phase delay (probably due to activation of the DNA damage checkpoints), hypersensitivity to UV radiation, telomere dysfunction, and chromosome instability and segregation defects (89). 
Targets of Pof3 are not yet known but may be found among proteins playing a role in chromatin structure and/or function. Fission yeast does not have a Tec1 ortholog, so targets different from Tec1 must be responsible for the phenotype of the deletion mutant. A protein that was found to interact with Pof3 is Mcl1, ortholog of the budding yeast S-phase regulator Ctf4 (134). Mcl1 is a protein essential for chromosome maintenance and contains WD40 repeats and SepB boxes (100,206). A ⌬mcl1 strain shows phenotypes similar to those of the ⌬pof3 mutant. Normally, Mcl1 is not rapidly degraded in wildtype cells, and no ubiquitination of Mcl3 could be demonstrated, suggesting that Mcl1 is not a target of Pof3. This is also in accordance with the fact that the two deletion mutants share the same phenotype, something that would not be expected if Mcl1 were a target of Pof3. Dia2 is conserved not only in fission yeast but also in filamentous fungi, suggesting a well-conserved function (BLAST searches and our observations). It would be worthwhile to investigate whether and how Dia2 regulates DNA replication in filamentous fungi. Grr1: AN F-BOX PROTEIN INVOLVED IN GLUCOSE AND AMINO ACID SENSING, CELL DIVISION CYCLE, MEIOSIS, AND RETROGRADE SIGNALING Grr1 (glucose repression resistant 1) in S. cerevisiae plays a role in a large number of cellular processes: retrograde signaling, pheromone sensitivity and cell cycle regulation, nutritionally controlled transcription, glucose sensing, and cytokinesis (122) (Fig. 4). Grr1 was initially found in budding yeast through a mutation causing resistance to glucose repression, with a deletion mutant additionally showing growth defects (7,52). Grr1 AND THE CELL CYCLE That Grr1 is involved in cell cycle control was shown by the accumulation of the cyclins Cln1 and Cln2 in a GRR1 deletion mutant (12,95,116,177,180,186). Degradation of the cyclins Cln1 and Cln2 via Grr1 is required after completion of the G 1 phase or when cells have to arrest in G 1 , for instance, after pheromone sensing. Binding of Grr1 to Cln2 has been established in multiple ways, but binding to Cln1 has never been detected, suggesting that Grr1 might target Cln1 indirectly. Another target of Grr1 is Gic2, a protein that accumulates throughout the G 1 phase and reaches its peak just before bud emergence. At that time, Cdc42, a Rho-related GTP-binding protein required for polarized growth of the cytoskeleton during bud emergence, is activated and binds Gic2. When the bud has emerged, polarized growth ceases and Gic2 is degraded to avoid morphological defects. Only Gic2 bound to Cdc42 can be phosphorylated and eventually recognized by Grr1 (81). During cytokinesis, the process of cell separation, Grr1 is responsible for the degradation of Hof1. Hof1 first forms a ring around the bud neck of the mother cell and then forms another ring in the daughter cell. Just after septum formation and separation, Hof1 normally disappears (196). Grr1 is recruited to the mother bud neck and binds to Hof1 after the activation of the mitotic exit network (19). This suggests that Grr1 not only is active in the nucleus and cytoplasm but also can be recruited to specific cellular structures. The Grr1 protein of C. albicans is 46% identical to Grr1 from budding yeast, and C. albicans GRR1 can fully complement a yeast ⌬grr1 strain. A ⌬grr1 strain of C. albicans exhibits pseudohyphal growth under yeastlike growth-inducing conditions and does not grow on glucose (123). 
The constitutive pseudohyphal growth phenotype (27) of the ⌬grr1 deletion strain could be explained by the stabilization of the two G 1 cyclins Ccn1 and Cln3. That Grr1 mediates degradation of Ccn1 and Cln3 was demonstrated by the fact that both cyclins are stabilized; additionally, Cln3 was found as a hyperphosphorylated protein in the ⌬grr1 strain. Elevated levels of Hof1 were also detected in the ⌬grr1 strain. These data suggest that Grr1 function in degradation of cyclins and Hof1, as well as glucose uptake, is conserved between the two yeast species. The fact that the genes from C. albicans can functionally replace budding yeast GRR1 suggests that interactions of Grr1 with its targets are conserved between the two yeasts. Grr1 AND MEIOSIS Generally, the cell cycle is closely connected to the availability of nutrients. In low-glucose medium, diploid cells tend to undergo meiosis and sporulation rather than grow and divide. A role for Grr1 in preventing untimely meiosis and sporulation was demonstrated in a study of the degradation of Ime2, a protein kinase required for multiple steps throughout the sporulation process (166). In a ⌬grr1 mutant, accumulation of (nonubiquitinated) Ime2 was found and meiosis still occurred, even under high-glucose conditions. In A. nidulans, GRRA, the ortholog of budding yeast GRR1, was found in a subtraction hybridization screen aimed at identification of genes that are specifically expressed during fruiting body development (107). GRRA is able to complement the ⌬grr1 phenotype in yeast partially or, when the gene is overexpressed, almost fully. Complemented phenotypes of the yeast deletion mutant include the morphological abnormalities and changes in gene expression upon a carbon source shift (107). This demonstrated that GrrA from A. nidulans is probably able to bind endogenous targets in budding yeast, suggesting that these interactions are still conserved within GrrA. However, the phenotype resulting from deletion of GRRA in A. nidulans is quite different from that of the yeast mutants. A. nidulans ⌬grrA mutants showed impaired ascosporogenesis, asexual conidiation, and sexual development, while displaying a normal vegetative growth. A similar phenotype was observed with CSN subunit mutants of A. nidulans (25,26). This suggests that CSN may be involved in the functioning of GrrA in A. nidulans. From further cytological examination, it was concluded that meiosis, giving rise to cro-zierlike structures that contain diploid nuclei, does not take place in the grrA mutant. In striking contrast, meiosis does occur in the budding yeast grr1 mutant, even under meiosissuppressing conditions. In light of this, it would be interesting to investigate whether and how the Ime2 ortholog in A. nidulans is involved in GrrA-controlled sexual development. In the plant-pathogenic fungus F. graminearum (Gibberella zeae), the ortholog of budding yeast Grr1, denoted Fbp1, was found in a restriction enzyme-mediated insertion screen for nonpathogenic mutants (60). The virulence of fbp1 mutants on barley heads was severely reduced compared to that of the wild type, and growth on potato dextrose agar and carrot agar produced less mycelium. Furthermore, Fbp1 plays a role in sexual reproduction. FBP1 deletion caused a loss of perithecium formation as females in self-crosses and smaller and fewer perithecia as a male in the outcross. The asci contained incomplete octads of abnormal spores and did not segregate in a one-to-one manner. 
Deletion constructs lacking the F box, the LRR, or both domains were nonfunctional both in the interaction with F. graminearum Skp1 and yeast Skp1 and in the ability to complement the sexual reproduction deficiency. Clearly, also in F. graminearum, protein turnover is required for sexual reproduction, but whether the ortholog of budding yeast Ime2 is involved is not known. Grr1 from budding yeast was unable to complement the knockout phenotype. Conversely, FBP1 from F. graminearum was able to partially complement the yeast grr1 mutant. This suggests that during evolution, Grr1 has retained the ability to bind at least some heterologous targets, despite their diversification. Grr1 is likely conserved as a pathogenicity VOL. 8, 2009 MINIREVIEWS factor in plant-pathogenic fungi. In a screen for pathogenicity genes of M. grisea using insertional mutagenesis, one mutant had a disruption in a GRR1 ortholog, PTH1 (pathogenicity 1), and a subsequent deletion of this gene resulted in reduced disease symptoms toward barley (192). Why Grr1 is required for full pathogenicity in this fungus is not known. Grr1 IN GROWTH ON GLUCOSE AND NONGLUCOSE CARBON SOURCES In addition to regulating meiosis, Grr1 from S. cerevisiae conducts other glucose availability-related functions. When high levels of glucose are sensed, Grr1 not only initiates the degradation of Ime2 but also activates hexose permeases (HXT) that allow the rapid import of glucose. Activation of HXT expression is achieved by the degradation of Std1 and Mth1, which promote the repression of HXT genes by binding to the repressor Rtg1. Upon glucose sensing, Std1 and Mth1 are phosphorylated, recognized by Grr1, and degraded. Free, unbound Rtg1 can then be phosphorylated, promoting an intramolecular interaction in Rtg1 that prevents DNA binding (165), thereby releasing repression of the HXT genes (91). Although not yet fully understood, the process of Snf1 protein kinase inactivation is also required for degradation of Std1 and Mth1 (160). In K. lactis, Grr1 was characterized as an F-box protein required for glucose signaling, just as described for budding yeast (70). Complementation of S. cerevisiae ⌬grr1 with K. lactis GRR1 showed full restoration of the growth and morphological defects of the deletion strain, demonstrating that GRR1 from K. lactis is a functional homolog of budding yeast GRR1. It was also shown that K. lactis GRR1 controls the levels of Sms1, the single ortholog of Mth1 and Std1, the budding yeast Grr1 targets. The Sms1 level decreased dramatically after glucose addition, suggesting rapid degradation of Sms1 to allow expression of the hexose transporter genes. Other targets of Grr1 in K. lactis have not yet been found, but as the complementation of S. cerevisiae ⌬grr1 with K. lactis GRR1 shows, Grr1 from K. lactis is probably able to bind targets in budding yeast. These targets also include Mth1 and Std1, suggesting that these interactions are still conserved, even though in K. lactis only Sms1 is present. In addition to activating genes required for glucose uptake, Grr1 from S. cerevisiae is required for the assimilation of alternative carbon sources (52). Grr1 mediates this process by recruiting Gis4, a target that is ubiquitinated but not degraded (117). The ubiquitinated form of Gis4 binds and activates phosphorylated forms of Snf1, which results in derepression of several genes required for the assimilation of alternative carbon sources. 
Gis4 is a rare example of a target that is not degraded after ubiquitination but is instead activated. This shows that although Grr1 function generally complies with the F-box hypothesis, ubiquitination of Gis4 is an exception to this rule. Whether and how Gis4 is phosphorylated before recognition by Grr1 and how it is rescued from degradation after addition of ubiquitin are not known. Grr1 also regulates other metabolic processes in the cell through its involvement in the degradation of Tye7 (131a) and Pfk27 (15). Tye7 is a transcription factor that activates several glycolytic genes (152), and Pfk27 synthesizes the second messenger fructose-2,6-biphosphate (154). After glucose deple-tion, the removal of these proteins via Grr1 probably facilitates the switch from glycolysis to gluconeogenesis. This shows that Grr1 is active not only during glucose availability but also during glucose depletion. Furthermore, together with Mdm30, another F-box protein, Grr1 regulates the activation of the Gal4 transcription activation complex. This complex regulates the transcription of genes involved in galactose assimilation. Degradation of the Gal4 isoforms Gal4a and Gal4b via Grr1 is required when glucose becomes available and galactose assimilation is shut down (147). This was demonstrated by deletion of GRR1, which results in stabilization of Gal4a/b and increased activation of Gal4 targets. Grr1 AND AMINO ACID SENSING Grr1 also plays a role in amino acid sensing by promoting the expression of several amino acid permease genes upon amino acid availability (17,77). The activation of amino acid permease genes is mediated by the transcription factors Stp1 and Stp2. These two proteins are cleaved after activation of the Ptr3/Ssy5 amino acid sensing pathway and transported to the nucleus (130). Stp1 cleavage depends on Grr1, suggesting that Grr1 targets a protein that normally inhibits cleavage. A candidate might be Ssy5, which is involved in the amino acid permease expression pathway and which is normally degraded upon amino acid availability. On the other hand, higher protein levels of Stp2 were found in ⌬grr1 cells than in wild-type GRR1 cells (15). Grr1 AND RETROGRADE SIGNALING Mitochondrial retrograde signaling (RTG) is a pathway connecting mitochondria to the nucleus, allowing cells to react to changes in the functional state of mitochondria. The RTG pathway targets two transcription factors, Rtg1 and Rtg3. These two proteins form heterodimers and activate RTG-responsive genes (128). Grr1 functions in this pathway by degradation of Msk1, a negative regulator that inhibits localization of Rtg1 and Rtg3 to the nucleus (129). Grr1 targets Mks1 only when it is unbound to either Rtg2 or Bmh1, a 14-3-3 protein. When the RTG pathway is off, Bmh1 protects Mks1 and allows it to inhibit Rtg1 and Rtg3. When the pathway is on, Mks1 instead binds to Rtg2 and is thereby inactivated. Degradation of free Mks1 via Grr1 ensures that the switch is quick and under tight control. Grr1 seems to be conserved among yeasts, considering the cell cycle and glucose uptake. Regarding meiosis, however, the role of Grr1 in budding yeast and the role of Grr1 in filamentous fungi seem opposite of each other. Another difference between Grr1 in yeasts and Grr1 in filamentous fungi is involvement in glucose uptake, since, for example, Mig1 and Snf1 repression in filamentous fungi is different from that in budding yeast (28,172). 
Regarding other functions, like amino acid sensing and retrograde signaling, hardly anything is known about Grr1 involvement in other fungi, partly because of a lack of knowledge but probably also partly because these processes are differently regulated. Once again, fewer studies to find Grr1 targets have been carried out for other fungi than for budding yeast (15). 684 MINIREVIEWS EUKARYOT. CELL Hrt3: AN F-BOX PROTEIN ENHANCING METHYLMERCURY RESISTANCE Hrt3 (high-level expression reduces Ty3 transposition) and Ylr224w in S. cerevisiae both promote resistance to methylmercury, a highly toxic compound (75). Overproduction of these two F-box proteins elevated resistance to the toxic compound, in contrast to 15 other F-box proteins studied. This resistance required the F-box domain of these two proteins and also the proteasome, suggesting that degradation of a target protein is involved. Targets of Hrt3 or Ylr224w that could explain the roles of these F-box proteins in methylmercury resistance have not yet been identified. Interactions of Hrt3 other than with ubiquitin conjugation proteins were with alcohol dehydrogenase (Adh2) and Idh1, a subunit of mitochondrial NAD ϩdependent isocitrate dehydrogenase, which catalyzes the oxidation of isocitrate to alpha-ketoglutarate in the tricarboxylic acid cycle (73,109). The biological relevance of the interaction with these two catabolism-related proteins is not clear, but they might be involved in sensitivity to methylmercury. Other interactions with ribosomal proteins Rpl12A and Guf1 and a phosphatase functioning in the G 1 /S-phase transition were found. Hrt3 is conserved in the entire fungal kingdom (BLAST searches and our observations), suggesting that it serves a fundamental function in fungi. Still, its characterization is limited; only the overexpressing phenotype was investigated for S. cerevisiae. Investigation of a (conditional) deletion mutant of S. cerevisiae or other fungi will be crucial to further explore the functions of Hrt3. Mdm30 (mitochondrial distribution and morphology) and Mfb1 (mitochondrion-associated F-box protein) in S. cerevisiae both control membrane fusion dynamics of mitochondria. The membranes of mitochondria continuously undergo fusion and fission to maintain a dynamic morphology. A target of Mdm30 is Fzo1 (mitofusin), a membrane-bound GTPase involved in membrane fusion. Fzo1 is ubiquitinated and targeted to the proteasome in an Mdm30-controlled manner (33). Another target of Mdm30 is Mdm34, a mitochondrial outer membrane protein (157). Interaction between Mdm30 and Mdm34 is essential for growth on nonfermentable carbon sources and for normal mitochondrial morphology. If and how ubiquitination of Mdm34 contributes to these functions are not yet understood. Additionally, Mdm30 (alternatively called Dsg1 [does something to Gal4]) is required for destruction of Gal4c, the inhibitory isoform of Gal4, and thereby plays a role in carbon assimilation (147) together with Grr1, which is required for the degradation of Gal4a/b, the Gal4 active isoforms. A ⌬dsg1 strain shows elevated levels of Gal4c, which results in the inability to use galactose as a carbon source. Mdm30 binds to Skp1 via its F-box domain, and these two proteins together with other components of the SCF complex participate in Fzo1 degradation (33). This means that Mdm30, in contrast to earlier views (68,71), can be part of an SCF complex and conforms to the F-box hypothesis. 
Mdm30 is not conserved in other fungi, but insight into how mitochondrial morphology is regulated by F-box proteins in S. cerevisiae is valuable, as in other fungi alternative F-box proteins or at least ubiquitination and protein turnover could also be involved in this intriguing process. Saf1: AN F-BOX PROTEIN INVOLVED IN ENTRY INTO QUIESCENCE Saf1 (SCF-associated factor 1) is an F-box protein required for the degradation of adenine deaminase 1 (Aah1) in S. cerevisiae. A microarray study showed clear AAH1 downregulation during the shift from proliferation to quiescence (47,48). Quiescence, also known as the stationary phase, is a state that yeast cells enter when nutrients are limiting. A SAF1 deletion mutant showed no downregulation of AAH1 expression, and stabilized protein levels of Aah1 were detected upon entry into quiescence. Degradation of Aah1 relies on Saf1, Skp1, and the proteasome and is dependent on the interaction between Saf1 and Skp1 via the F-box domain of Saf1. Saf1 interacts in a yeast two-hybrid experiment with both Aah1 and Skp1. Loss or mutation of the F-box domain of Saf1 abolished the interaction with Skp1 but not with Aah1, although the latter interaction was slightly weakened. Mutation of the lysine at position 329 of Aah1 did not affect the interaction with Saf1 but increased the stability of Aah1, suggesting that this lysine might be the ubiquitination site. Other targets of Saf1 are not yet known, but a candidate might be Ura7, a protein that is present at reduced levels in SAF1-overexpressing strains and is stabilized in saf1 strains (36). Curiously, although the known target(s) of Saf1 is conserved among fungi, Saf1 itself is not. In fungi other than budding yeast, degradation of these Saf1-targeted proteins might not be required upon quiescence, or the proteins might be turned over in a different manner. Ufo1: AN F-BOX PROTEIN INVOLVED IN DNA DAMAGE RESPONSE In S. cerevisiae, Ufo1 (UV-F box-HO 1) targets the endonuclease Ho for proteasomal degradation and functions in genome stability and in response to DNA damage (88). After DNA damage, the MEC1/RAD9/CHK1 pathway phosphorylates Ho, stimulating its recognition and degradation. Ufo1 itself is also degraded via self-ubiquitination. This ubiquitination reaction is mediated by the ubiquitin interaction motifs (UIMs) in the C terminus of Ufo1 that bind during assembly in the SCF complex to Ddi1, a protein containing ubiquitin-like (UBL) and ubiquitin-associated (UBA) domains (78). Removal of the UIM domain in Ufo1 (Ufo1Δuim) stabilizes the protein and inhibits the degradation of other proteins normally degraded by SCF complexes. It therefore seems that Ufo1Δuim may prevent assembly of other F-box proteins into an SCF complex. Ufo1 also appears to regulate the degradation of Rad30, since that protein is stabilized in proteasome mutants and in cells lacking Skp1 or Ufo1. Direct interaction between Ufo1 and Rad30 has, however, not yet been demonstrated. Rad30 is a polymerase eta necessary for DNA replication near damaged DNA (184) and is removed again after replication because of its high error frequency. Recently, a study of the interactome of green fluorescent protein (GFP)-labeled Ufo1 identified new proteins taking part in Ufo1 function (9a). The proteins interacting specifically with GFP-Ufo1 and bearing PEST degrons, potential phosphorylation sites that are often found in proteins targeted for degradation (167), are Rpb2, Spt5, Fas2, and Gip2.
Rpb2 is an RNA polymerase II (Pol II) subunit (39), and Spt5 is a protein that mediates both activation and inhibition of transcription elongation (125). Fas2 is a fatty acid synthetase component (142), and Gip2 is a putative regulatory subunit of the protein phosphatase Glc7p, involved in glycogen metabolism (195). Whether these proteins are targets of Ufo1 is not known. Ufo1 is not conserved in other fungi, suggesting that this type of regulation of the DNA damage response is restricted to (close relatives of) budding yeast. Fwd1: AN F-BOX PROTEIN CONTROLLING THE CIRCADIAN CLOCK In N. crassa, Fwd1 (F-box protein containing a WD40 repeat) was found to be involved in controlling the circadian clock via degradation of Frequency (Frq) (65,66). Circadian clocks regulate a wide variety of physiological and molecular processes during oscillation between day and night. Besides being regulated by Frq, the circadian clock in Neurospora is further regulated by light and controlled by the transcription factors Wc-1 and Wc-2 (44). Frq inhibits its own transcription by inhibiting Wc-1 and Wc-2 (1, 2). When Frq is hyperphosphorylated by CK1 and CKII (63), it is recognized by Fwd1 and degraded. This releases Wc-1 and Wc-2 activity, leading to the production of new Frq. The function of Fwd1 in the SCF complex is regulated by the COP9 signalosome CSN. Disruption of a subunit of CSN impaired the degradation of Frq, probably because reduced amounts of Fwd1 were present in the csn mutant: the half-life of Fwd1 is reduced from 6 to 9 h to 45 min, and other components of the SCF complex proved to be unstable. In a ⌬csn-2 mutant, SCF is constitutively neddylated, which enhances the degradation rate of Fwd1. This degradation is probably independent of binding of Frq to Fwd1, resulting in reduced amounts of Fwd1 and impaired degradation of Frq1 (64). The CSN-2 deletion mutant also exhibits slow growth and reduced production of aerial hyphae compared to levels for the wildtype CSN-2, suggesting that other F-box proteins might also be affected. N. crassa is the main model organism for the investigation of circadian rhythms in fungi, and only for this fungus has Fwd1 been studied intensively. Nevertheless, this protein, as well as circadian rhythms, is present in other filamentous fungi (14,114), as are homologs of N. crassa clock components, like Frq, WC-1, and WC-2 (132). However, in A. nidulans no homolog of FRQ is present (57), even though an FWD1 homolog is present (our observations). This suggests that at least in some fungi, Fwd1 has other targets. Interestingly, in plants, involvement in rhythmic processes has also been demonstrated for several F-box proteins, like ZEITLUPE, FKF1, and AFR (61,151,187), which play a role in photocontrol of the circadian period, the circadian clock, and phytochrome A-mediated light signaling, respectively. Met30: AN F-BOX PROTEIN INVOLVED IN SULFUR METABOLISM As described above, the F-box hypothesis states that the targets of F-box protein are degraded after ubiquitination, but ubiquitination of the Grr1 target Gis4 does not lead to degradation. Met30 is another example of an F-box protein whose target is not necessarily degraded. Met30 is an F-box protein from S. cerevisiae that can recruit its target to the SCF for degradation, but it can also activate its target by ubiquitination when the target is assembled into a transcription activation complex. 
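The Frq/Fwd1 circuit described above is, in outline, a delayed negative-feedback loop: WC-1 and WC-2 drive frq expression, Frq represses them, and SCF(Fwd1)-mediated turnover of hyperphosphorylated Frq closes the loop. The sketch below is a generic Goodwin-style toy model of that logic, written only to make the role of the clearance step concrete; the equations, parameter values, and variable names are hypothetical and are not taken from the N. crassa literature.

# Toy Goodwin-style negative-feedback loop, purely illustrative: frq mRNA ->
# FRQ protein -> hyperphosphorylated FRQ, which both represses its own
# transcription (standing in for WC-1/WC-2 inhibition) and is the species
# cleared by the "FWD-1" step. All parameters are made up; with a sufficiently
# steep feedback (large n), models of this class can oscillate.
def simulate(clearance=1.0, hours=240.0, dt=0.01, n=10):
    m, p, pp = 0.1, 0.1, 0.1          # mRNA, FRQ, hyperphosphorylated FRQ
    trace = []
    steps = int(hours / dt)
    for i in range(steps):
        dm = 1.0 / (1.0 + pp ** n) - m        # repressible transcription
        dp = m - p                            # translation / phosphorylation
        dpp = p - clearance * pp              # "FWD-1"-dependent clearance
        m, p, pp = m + dm * dt, p + dp * dt, pp + dpp * dt
        if i > steps // 2:                    # record the second half only
            trace.append(p)
    return min(trace), max(trace)

for k in (0.2, 1.0):                          # weak vs. normal clearance
    lo, hi = simulate(clearance=k)
    print(f"clearance rate {k}: FRQ level ranges {lo:.2f}-{hi:.2f}")

Nothing quantitative should be read into this sketch; it only illustrates why a dedicated turnover route for the repressor, here played by SCF(Fwd1), is a natural ingredient of such a clock, and how changing that clearance rate reshapes the simulated rhythm.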
A major Met30 target is Met4, a transcriptional activator of the sulfate assimilation pathway controlling MET and SAM genes for uptake and biosynthesis of sulfur-containing compounds (194). Met4 is also required for cadmium tolerance by activating the expression of genes involved in glutathione biosynthesis (10,21,212). The transcription of Met30 is regulated through a feedback loop, as Met4 controls the activation of Met30 (171). The way Met4 is regulated by Met30 has been under discussion for several years (23,24,83,140,141). A picture in which Met30 regulates Met4 in multiple ways has emerged (30). First, Met30 activates Met4 when low levels of methionine are available, resulting in the expression of MET and SAM genes. When, through MET and SAM activation, higher intracellular levels of methionine are obtained, the intracellular concentration of cysteine also increases through the S-adenosyl-methionine and cysteine biosynthesis pathways. High levels of cysteine again lead to the inactivation of Met4 by Met30. The alternative activation and inactivation of Met4 by Met30 are explained in a two-step model. In the inactive state, dimerization of Met4 causes low interaction with cofactors, leading to intermediate expression of MET and SAM genes. Met30 relieves this dimerization through degradation of one of the dimerized Met4 proteins, leaving the other Met4 subunit free to assemble into an activation complex, thus triggering expression of the MET and SAM genes (step 1). When higher levels of sulfur-containing amino acids are present, Met30 binds to Met4 in the assembled promoter complex, leading to Met4 ubiquitination (step 2); this ubiquitinated promoter complex represses transcription of the MET and SAM genes. Eventually, Met4 is degraded and the complex disassembles, making space for new complexes to form on the promoter when levels of sulfurcontaining amino acids are low again. In S. pombe, the Met30 homolog Pof1 is an essential protein that targets the Met4 homolog Zip1 (a basic leucine zipper) (62). Like in S. cerevisiae, Zip1 mediates cadmium tolerance by activation of cadmium response genes. Regulation of Met4 by Met30 and regulation of Zip1 by Pof1 show similar patterns. However, one difference between the two systems is that Zip1 is required for the biosynthesis of sulfur-containing amino acids only under low levels of sulfur and is not required during normal growth conditions, as is Met4 (13). In N. crassa, the Met30 homolog Scon2 (sulfur controller) is also required for sulfur uptake and assimilation (110,111). Cys3, the Met4 ortholog of N. crassa, is degraded via Scon2 and regulates the entire set of sulfur uptake and assimilation genes. Interaction between Scon2 and Scon3 (N. crassa Skp1) was observed using a yeast two-hybrid screen and coimmunoprecipitation and was 686 MINIREVIEWS EUKARYOT. CELL dependent on the F-box motif in Scon2 (183). Cys3 activates not only sulfur utilization genes but also the transcription of CYS3 itself and of the SCON2 gene. Therefore, when Scon2 targets Cys3 for proteolysis, its own activation is also reduced, ensuring the possibility of rapidly activating Cys3 again. Mutational analysis of Scon2 showed that the F-box domain plays an important role in the regulation of Cys3. Eleven out of 14 mutations in the F-box domain gave rise to a constitutively repressed phenotype corresponding to the ⌬cys3 phenotype. 
This is at first sight surprising, since mutation of the F box is expected to impair Skp1 binding (although loss of binding to Skp1 was not verified) and thereby decrease the ability to degrade Cys3. This would in turn be expected to lead to a constitutive activation phenotype of CYS3. It is possible that Scon2 is required not only for the degradation of Cys3 but also for its activation, as demonstrated for Met30 and Met4 in budding yeast. The role of the Met30 homolog SconB (148) in A. nidulans has proven to be similar to that of Scon2 in N. crassa, including binding to Skp1, called SconC in A. nidulans (164). Still, differences have also been found: SCONB is not transcriptionally activated by the Cys3 counterpart of A. nidulans, MetR (149), and although MetR and Cys3 both recognize the same DNA sequences, full-length CYS3 cannot fully complement the ΔmetR phenotype. In addition to its role in sulfur metabolism, Met30 is essential for cell cycle progression. It regulates multiple aspects of the cell cycle, including the expression of cyclins required for G1-phase progression and the accumulation of proteins involved in replication and progression through the M phase (161,190). A target of Met30 was believed to be Swe1 (34), a Wee1 family kinase that inhibits Cdc28 by phosphorylation, since high activity of Swe1 and nonubiquitinated forms of Swe1 were found in Δmet30 cells. An in vivo interaction between Met30 and Swe1 was also demonstrated (34). A later study concluded, however, that Met30 is not responsible for degradation of Swe1 but that degradation is a result of the interaction between Swe1 and Hsl7 (84,138). This interaction with Hsl7 mediates the translocation of Swe1 out of the nucleus to the mother bud neck, where its degradation takes place in an unknown manner. The involvement of Met30 in Swe1 degradation is therefore disputable but cannot be ruled out entirely. Regardless of the exact mechanisms, it is now clear that the activation and degradation of proteins involved in the cell cycle are under the control of multiple F-box proteins (Cdc4, Grr1, and Met30). These F-box proteins either target cell cycle proteins directly or regulate their levels indirectly. Finally, a Met30 homolog, Lim1, was recently found in H. jecorina through a yeast one-hybrid screen, and it was demonstrated that Lim1 can bind promoter sequences of the cellobiohydrolase gene CBH2 (58). Clarification of the role of Lim1 in transcription and the involvement of possible targets awaits further investigation. F-BOX PROTEINS WITHOUT IDENTIFIED TARGETS F-box proteins whose targets are known to be ubiquitinated through binding to Skp1 and assembly into an SCF complex are described above. For several other fungal F-box proteins, binding to Skp1 does not appear to be required for function or target inactivation. The functions of these proteins, then, seem to fall outside the F-box hypothesis, as probably no targets are recruited to an SCF complex. The F-box domain in some of these proteins could interact with proteins other than Skp1. Alternatively, interaction with Skp1 is required only for self-ubiquitination of the F-box protein to control protein levels. Fbh1: A DNA REPAIR F-BOX PROTEIN Fbh1 (F-box DNA helicase) is an F-box protein from S. pombe involved in the regulation of recombination levels and DNA repair (144,156,175). Like its human homolog, Fbh1 contains a helicase domain to unwind DNA. Fbh1 functions downstream of the recombinase enzyme Rhp51 (the ortholog of S. cerevisiae Rad51).
Although Fbh1 binds Skp1, it appears that the F box is not necessary for Fbh1 to promote DNA repair, since two mutations in the F-box domain did not alter growth or genotoxin resistance (120). A mutation in the helicase domain, however, did affect the DNA repair function. The human homolog assembles into an SCF complex, but its targets, if any, are also still unknown. Possibly, Skp1 binding mediates only self-ubiquitination of Fbh1. Remarkably, Fbh1 is the only fungal protein that contains an F box combined with a helicase domain, and it is found only in fission yeast. Frp1: AN F-BOX PROTEIN REQUIRED FOR ROOT INVASION In F. oxysporum f. sp. lycopersici, a vascular wilt pathogen of tomato, Frp1 (F-box protein required for pathogenicity 1) was found using an insertional mutagenesis screen for pathogenicity genes (46). Frp1 is required for assimilation of various (nonsugar) carbon sources as well as induction of genes for cell wall-degrading enzymes, which would explain the deficient plant root colonization and penetration by the Δfrp1 mutant (W. Jonkers et al., in press). Frp1 binds to Skp1 in yeast two-hybrid and pull-down assays, but mutations in the F-box domain of Frp1 that impair binding to Skp1 do not affect the phenotype, suggesting that the main function of Frp1 does not depend on ubiquitination of targets (W. Jonkers and M. Rep, unpublished results). Because FRP1 orthologs are present in other plant-pathogenic fungi, it will be interesting to study their role in pathogenicity in these fungi. The deletion of the FRP1 ortholog in F. graminearum has been reported previously, but an initial characterization revealed no obvious differences from the wild type (60). Pof14: AN F-BOX PROTEIN THAT INHIBITS ERGOSTEROL SYNTHESIS Pof14 is an F-box protein in S. pombe required for survival upon hydrogen peroxide stress (193). In response to such stress, Pof14 binds and inhibits Erg9, a squalene synthase involved in ergosterol synthesis. Ergosterol enhances the permeability of the membrane and thereby the uptake of hydrogen peroxide. Pof14 and Erg9 bind to each other in a membrane-bound complex, as was demonstrated by tagging both proteins with fluorescent tags. Binding of Pof14 to Erg9 inhibits the activity of Erg9, and overexpression of POF14 leads to decreased levels of squalene synthase activity and ergosterol. Transcription of POF14 is induced after treatment with hydrogen peroxide, and deletion of POF14 decreases viability after hydrogen peroxide treatment (193). Decreased viability was not observed upon deletion of the F-box domain, suggesting that binding of Pof14 to Skp1 is not required for peroxide resistance. However, binding of Pof14 to Skp1 may promote the degradation of Pof14 itself. In wild-type cells, Pof14 has a half-life of 20 to 40 min, but in temperature-sensitive mutants of Skp1, Pof14 is stable for at least 60 min (193). Whether this stabilization is due to defective assembly of Skp1 and Pof14 into an SCF complex for self-ubiquitination is not known. NON-SCF F-BOX PROTEINS Ctf13 and Rcy1 are two F-box proteins that were found to bind Skp1 but nevertheless function independently of an SCF complex and probably also do not have targets to be ubiquitinated; therefore, they are unlikely to be involved in protein inactivation. The binding of these proteins to Skp1 may be evolutionarily conserved but may have acquired an alternative function. Ctf13: A KINETOCHORE ASSEMBLY F-BOX PROTEIN In S.
cerevisiae, Ctf13 (chromosome transmission fidelity 13) is part of the CBF3 complex, which in turn is part of the centromere-bound scaffold, where the microtubule binding components of kinetochores assemble. The CBF3 complex consists of four components: Skp1, Ctf13, p64 (encoded by CEP3 and containing a zinc finger centromere binding domain), and p110 (encoded by NDC10, a gene also known as CBF2 or CTF14) (105,214). The binding of Ctf13 to Skp1 requires the phosphorylation of Ctf13 (87,173) and the interaction with Sgt1 and Hsp90 (8,188) for the assembly and function of the kinetochore complex. When mutations that prevent binding of Ctf13 to Skp1 were introduced, severely impaired cell growth was observed. Interestingly, Ctf13 is targeted by another F-box protein, Cdc4, for degradation, which is in accordance with Ctf13 not being part of an SCF complex itself. Binding of Ctf13 to p64 rescues Ctf13 from degradation. Probably only free Ctf13 is degraded via Cdc4, and Ctf13 degradation might be required to tightly regulate kinetochore assembly. Rcy1: AN F-BOX PROTEIN INVOLVED IN VESICLE TRAFFICKING Rcy1 (recycling 1) was found in a genetic screen for S. cerevisiae mutants defective in membrane trafficking through the endocytic pathway. Deletion of RCY1 results in an arrest of the endocytic pathway and leads to accumulation of enlarged compartments close to areas of cell expansion (202). For Rcy1 to function, it needs to bind Skp1, but other components of the SCF are not required for recycling or for degradation of Rcy1 itself. Rcy1 contains two SEC10 domains and a CAAX box, implicated in mediating interaction with membranes and also needed for recycling. Rcy1 is required for the recycling of the v-SNARE Snc1p, a membrane protein that fuses exocytic vesicles with the plasma membrane (56). During vegetative growth, Snc1p is localized at the plasma membrane and continually recycles through the Golgi body (121,143). Rcy1 binds via its C-terminal domain to two GTPases, proteins that regulate vesicle transport during exo- and endocytosis and are required for Golgi body function in yeast (16). Rcy1 interacts specifically with the active forms of the two GTPases, and together they colocalize to the Golgi body and endosomes. A second protein recycled by Rcy1 is Kex2, a calcium-dependent serine protease involved in preprotein processing. Kex2p is a membrane-bound protein cycling between trans-Golgi vesicles and late endosomal compartments (54,203). These studies suggest that the involvement of Rcy1 in vesicle transport is not related to protein degradation. Indeed, ubiquitination of the interacting proteins seems unlikely, since assembly into an SCF complex is not required for Rcy1 function. The F box of Rcy1 is required for binding to Skp1, but the biochemical function of this small complex during vesicle trafficking remains unclear. Perhaps surprisingly, Rcy1 has been found in an SCF complex (113), but it remains unknown whether this is a functional complex. The S. pombe homolog of Rcy1, Pof6, also forms a complex with Skp1 and does not function in an SCF complex (69). Pof6 is required for septum processing and sporulation. Deletion of POF6 results in the formation of a thick septum and the absence of viable spores, which differs from the deletion phenotype of Rcy1, which is not lethal. Recently, a specific Pof6 interactor, Sip1, was found using TAP (tandem affinity purification) and MudPIT (multidimensional protein identification technology) analysis (82).
It was shown that Pof6 and Sip1 form a non-SCF complex with Skp1 and that both proteins require interaction with Skp1 for stability. Sip1 is a widely conserved protein in eukaryotes and consists of HEAT (Huntingtin, elongation factor 3, the regulatory A subunit of protein phosphatase 2A, and TOR1) repeats required for interaction with other proteins. Like Pof6, Sip1 is essential and plays a role in endocytosis and cytokinesis. The budding yeast ortholog of Sip1, Laa1, has not yet been identified as an interactor of Rcy1 but might also be part of the Rcy1-Skp1 complex, as it too mediates protein transport between the trans-Golgi network and endosomes (50). Clearly, the role of Rcy1 is conserved between budding yeast and fission yeast. In fact, Rcy1 seems to play a fundamental role in vesicle trafficking in fungi, since it is conserved throughout the fungal kingdom (BLAST searches and our observations). NON-Skp1 BINDING F-BOX PROTEINS Some proteins with an F-box domain do not bind Skp1 but instead bind another E3 ligase subunit. For other F-box proteins, binding to Skp1 could not be demonstrated or has not been investigated. The F-box domain in some of the latter proteins may also mediate assembly into different complexes. Ela1 (elongin A 1) and Elc1 (a Skp1 homolog) in S. cerevisiae were identified as the homologs of mammalian elongin complex components (106). Also, in yeast, Ela1 and Elc1 are present in the same complex. Probably, Ela1 does not act in an SCF complex, since it binds Elc1 instead of Skp1 and since Elc1 does not bind Cul1. Ela1 and Elc1 likely bind to Cul3. This new combinatory complex was not reported earlier, and it shows the possibility that subunits from different complexes can interchange to form new complexes, potentially broadening the arsenal of ubiquitin ligase superfamilies. Ela1 and Cul3 were found to be required for cell survival after treatment with UV or the mutagen 4-nitroquinoline 1-oxide. Both proteins are also required for degradation and polyubiquitination of subunit Rpb1 of RNA Pol II. Pol II is normally removed from damaged DNA to make room for the nucleotide excision repair machinery to assemble at that site and repair damaged DNA strands (169). Mfb1 (mitochondrion-associated F-box protein) in S. cerevisiae controls membrane fusion dynamics of mitochondria, like Mdm30, described above. Deletion of MFB1 results in abnormal mitochondrial morphologies, including short tubules, aggregates, and fragments in different combinations (103). Binding to Tom71 localizes Mfb1 to mitochondria, and binding to Tom70 ensures stable association with these organelles (104). The paralogous TPR (tetratricopeptide repeat) proteins Tom70 and Tom71 are both associated with mitochondrial protein import (22,176). Loss of MDM30 also results in short tubules, aggregates, and fragments but in a different distribution than in Δmfb1 mutants (53). A double knockout of MFB1 and MDM30 results in a decreased number of short tubules but more aggregates and fragments. On rich dextrose and glycerol plates, an mfb1 mutant grows like the wild type, an mdm30 mutant grows more slowly, and a double mutant displays a severe growth problem, probably due to mitochondrial DNA instability (45). Possible targets of Mfb1 are proteins involved in mitochondrial morphogenesis, but for none of the candidate proteins were larger amounts seen in the Δmfb1 mutant than in the wild-type MFB1 strain.
This observation and the lack of demonstration that Mfb1 binds Skp1 suggest that Mfb1 may not function as part of an SCF complex. Amn1: A MITOSIS EXIT STATE F-BOX PROTEIN Amn1 (antagonist of mitotic exit network 1) from S. cerevisiae is listed as one of the 21 budding yeast F-box proteins in an earlier review (205). The protein shares homology with another F-box/LRR protein, Pof2 of fission yeast, with little conservation of the F-box domain, in part because of an interspersed region of 56 amino acids in the motif. Although a genetic interaction has been found, a physical interaction between Amn1 and Skp1 could not be demonstrated (in insect cells), perhaps because of the interspersed region in the F-box domain. Amn1 itself might be targeted for SCF-mediated proteolysis, since stabilized forms of Amn1 were found in cul1 and skp1 mutant strains. AMN1 expression peaks at the M/G1 phase, and Amn1 is normally degraded when cells enter the S phase, showing an accumulation pattern similar to that of the Cdc4 target Sic1. Amn1 is required to turn off the mitotic exit pathway after it is completed, and it inhibits the function of Tem1, a small GTPase that activates the mitotic exit network, which causes spindle breakdown, degradation of mitotic cyclins, cytokinesis, and cell separation (11). It was shown that Amn1 binds to Tem1 and inhibits its function by obstructing the binding of Tem1 to Cdc15. This ensures that the cell can exit from mitosis and enter the G1 phase (199). Tem1 levels are elevated in a Δamn1 mutant (153), suggesting that Amn1 may regulate Tem1 levels. Since Amn1 apparently does not bind Skp1, this regulation may not involve ubiquitination. OTHER FUNGAL F-BOX PROTEINS Broad, genomics-based interaction and localization studies have provided some information on F-box proteins that have not been investigated individually (Table 2). Most of these proteins bind Skp1 and can assemble into an SCF complex (113,181), and some also interact with ribosomes (Ynl311c) (51) or other proteins, like Sgt1 (Ynl311c and Ydr306c) (43). Sgt1 binds to Skp1 and other SCF components (96) and acts as a "client adaptor," linking the chaperone Hsp90 to SCF and CBF3 complexes containing Skp1 (29). For one F-box protein (Ymr258c), it was determined using a GFP fusion that it localizes to the cytoplasm and nucleus (74), and for another (Ylr224w), it was demonstrated that it is readily monoubiquitinated in vitro by SCF-Ubc4 complexes (113). Finally, two other F-box proteins, Cos111 and Pof10, cannot be classified into one of the above-mentioned categories but have been studied and can be related to specific cellular processes. In S. cerevisiae, the F-box protein Cos111 was identified from a mutant that showed increased sensitivity to ciclopirox olamine, an antifungal agent that chelates iron and other ions and thereby inhibits metal-dependent enzymes (119). The cos111 mutant is sensitive to ciclopirox olamine at 36°C and is also sensitive to hydroxyurea. In addition, the cos111 deletion (170). It is unknown whether Cos111 binds Skp1 and regulates protein degradation via an SCF complex. Pof10 from S. pombe is an F-box/WD40 protein that binds Skp1 via its F-box domain (76). Deletion of POF10 does not result in an obvious phenotype, which is remarkable since POF10 is conserved between fission yeast and filamentous fungi (BLAST search and our observations).
On the other hand, overexpression of POF10 results in lethality, probably due to sequestration of Skp1, thereby preventing the formation of other SCF complexes. Viability was restored by concomitant overexpression of SKP1, presumably by making more Skp1 available for formation of other SCF complexes. Binding to Skp1 may not lead to self-ubiquitination, because Pof10 is highly stable in contrast to other F-box proteins. Although Pof10 bears a protein-protein interaction domain (a WD40 motif), targets of Pof10 have not been identified. CONCLUDING REMARKS Fungal F-box proteins take part in highly diverse cellular processes, but most share the same molecular function: removal or inactivation of specific proteins. Loss of a fungal F-box protein often results in a pleiotropic phenotype, especially when the F-box protein has multiple targets. For Cdc4, which has 10 known targets, a null mutation is lethal. Conversely, when a gene deletion shows no or little effect, the F-box protein may target only one or a few proteins. For example, the original deletion mutant of COS111 did not show any phenotype, but it was later demonstrated that COS111 may have a function in tolerance to an antifungal agent. Targets of F-box proteins can vary from transcription factors, enzymes, DNA repair proteins, structural proteins, and cyclins to inhibitors and/or activators of various other processes. These targets can operate at an intermediate level of a signaling pathway, for example, Rcn1 and Sic1, which are degraded via Cdc4, and Mks1 and Ime2, which are degraded via Grr1. Other targets function at the end of a pathway, examples of which are the transcription factors Tec1 and Gcn4, degraded via Cdc4, and Frq, degraded via Fwd1. Targets of F-box proteins are recognized mostly when phosphorylated. Such phosphorylation can be performed by many different protein kinases, like CDKs, mitogen-activated protein kinases, Pho kinases, and casein kinases, depending on the pathway or process in which the target protein functions. Different forms of phosphorylation can be required for recognition. For instance, a requirement for hyperphosphorylation of a target creates a threshold before the target is degraded and ensures that multiple phosphorylation steps control degradation, as for Cdc4-mediated degradation of Sic1 and Fwd1-mediated degradation of Frq. An exceptional case of an unphosphorylated target is Mks1, which is degraded by Grr1 when it is bound to neither Rtg2 nor Bmh1. Apparently, the site on Mks1 recognized by Grr1 is masked by these interacting proteins. In addition to these well-studied F-box proteins, several other F-box proteins interact with proteins without targeting them for disposal, examples being Ctf13, Rcy1, and Pof14. For still other F-box proteins, no targets or interacting proteins have been found, and these are often referred to as "orphan F-box proteins." It might be that the targets are yet to be found or that no targets exist for these orphan F-box proteins. Especially for those for which mutation of the F-box domain does not (greatly) affect function (like Frp1 and Fbh1), the F-box domain may serve as a degradation motif required solely for self-ubiquitination. Instead of targeting other proteins, they may, for instance, function as DNA binding proteins or perform an enzymatic reaction. Another variation on the F-box hypothesis is seen for Ela1, an F-box protein that does recruit targets for degradation but assembles in a complex different from the SCF complex.
The levels of free and SCF-bound F-box proteins in the fungal cell are regulated by recycling of Skp1-F-box complexes within the SCF core via Nedd8 and CAND1 and by self-ubiquitination. It seems that in unicellular fungi, like budding and fission yeasts, recycling is less important and F-box proteins are regulated mostly by autoubiquitination, as shown for several budding yeast F-box proteins. The difference in regulation could be related to the small number of F-box proteins in these two yeasts (21 and 16, respectively) relative to the number in filamentous fungi, which can be 100 or more (BLAST searches and our observations). F-box proteins are probably also regulated by recycling and autoubiquitination to remove "free" F-box proteins (i.e., unbound to a target) from SCF complexes, so that these complexes become available for other F-box proteins. Such a scenario was supported by overexpression of POF10, probably leading to constitutive occupation of Skp1, which results in lethality (76), and by the Ufo1Δuim mutant lacking the UIMs. Normally, the UIMs are required for self-ubiquitination of Ufo1; a mutant lacking the UIMs cannot be ubiquitinated anymore and therefore remains in the SCF complex (78). In a variation on self-ubiquitination, Ctf13 is targeted by another F-box protein, Cdc4, when unassembled into a CBF3 complex. Besides being regulated at the protein level by ubiquitination and recycling, F-box proteins can be regulated at the transcriptional level. An example is Met30, which creates a negative-feedback loop by degrading the transcription factor Met4, thereby inactivating its own transcription. Another potential mechanism of regulation is localization. Some F-box proteins function specifically at certain sites in the cell or at certain regions on chromosomal DNA. To be transported to these sites, interacting partners can play an important role, as demonstrated for Mfb1 and Dia2/Pof3. In fungi, regulation of the activity of F-box proteins themselves is usually not an integral part of a signal transduction pathway, in contrast to some cases in plants, where it has been demonstrated that F-box proteins can be activated by direct binding to a small molecule. These F-box proteins act as receptors, with direct hormone binding triggering their activation (reviewed in reference 214a). Such a mechanism remains a possibility also in fungi, for instance, for Met30, which is activated when high levels of methionine, S-adenosyl-methionine, or cysteine are present. Binding studies of these sulfur-containing amino acids or derivatives to Met30 could confirm this possibility. Perhaps less sophisticated, many fungal F-box proteins appear to function simply as garbage collectors, removing waste proteins that have been marked for degradation. However, several variations on this theme have emerged. For instance, Met30 regulates the transcription factor Met4 in complex ways and does not simply follow the standard F-box hypothesis. Further in-depth investigations of F-box proteins and their potential targets or other functions may reveal more such variations. Most F-box proteins discussed in this review are from S. cerevisiae, providing a fairly comprehensive overview of the variety of functions that F-box proteins perform in a eukaryotic cell. The additional results obtained with orthologs and other F-box proteins from fission yeast and filamentous fungi give an impression of the degree of functional conservation of F-box proteins between fungal species.
For example, functional conservation of Grr1 is, not unexpectedly, less when species are more distantly related: in contrast to the GRR1 ortholog of C. albicans, the orthologs from two filamentous fungi could not fully complement the Δgrr1 mutant of yeast. Since Skp1 is highly conserved between species, this is probably due to differences in target recognition. Evolution of an F-box protein is constrained by the requirement to recognize diverse targets. When an F-box protein encounters orthologs of its natural target in another fungus or a "novel" target (i.e., not present in its natural environment), recognition might be less efficient, despite overall sequence conservation in the target recognition domain of the F-box protein. Conservation of F-box protein function between different fungal species can also be assessed by the conservation of targets and the pathways leading to the phosphorylation of these targets. Fission yeast and C. albicans harbor orthologs of targets of budding yeast Cdc4 and Grr1, but these have not yet been found in other fungal species (Table 1). For Met4, a target of Met30, homologs are present in fission yeast and in the filamentous fungi A. nidulans and N. crassa. Apparently, the Met30-Met4 interaction system has remained relatively stable during fungal evolution. Searches of fungal genome sequences allow an estimation of the numbers of genes encoding F-box proteins in different fungal species. A. nidulans, for example, contains about 50 genes encoding F-box proteins, and in different Fusarium species 60 to 95 genes encoding F-box proteins are present (Jonkers and Rep, unpublished). Comparing these numbers to the smaller numbers in yeast (21 in budding yeast and 16 in fission yeast) makes clear that the potential variation of processes regulated by F-box proteins in filamentous fungi is much more extensive than that in yeasts. Examples of this are regulation of the circadian clock in N. crassa and plant infection by F. oxysporum. In fungi, it is relatively easy to investigate F-box proteins, owing to the availability of knockout strains and the amenability of these organisms to molecular manipulation. Sophisticated screens for target identification and deletion studies of all genes encoding F-box proteins present in a fungal genome, combined with detailed investigations of protein-protein interactions and posttranslational modifications, will promote a deeper and broader insight into the diverse functions of F-box proteins in eukaryotic cells. Among the general lessons already learned from investigation of the fungal F-box arsenal are that these proteins function in a very broad array of cellular functions and can target many different proteins for degradation. Furthermore, clearly not all F-box proteins comply with the F-box hypothesis, and the regulation of at least some of these proteins is more complex than expected. Concerning the conserved F-box proteins found in budding yeast and filamentous fungi, we learned that they can have both conserved and diversified targets and that accumulation of conserved targets in deletion mutants of F-box proteins can sometimes result in different phenotypes. Fungi remain a rich source for the discovery and understanding of a great variety of intricate cellular processes with which F-box proteins are involved.
2018-04-03T03:39:47.570Z
2009-03-13T00:00:00.000
{ "year": 2009, "sha1": "90cef3744bbbd8e06fc440ec5fc491ae3877b173", "oa_license": null, "oa_url": "https://ec.asm.org/content/8/5/677.full.pdf", "oa_status": "GOLD", "pdf_src": "Highwire", "pdf_hash": "3e4b0aeaf2cd0b44e1bbeaf125b42628515b396d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine", "Biology", "Economics" ] }
218534746
pes2o/s2orc
v3-fos-license
Early Childhood Caries and Its Associated Factors among 9- to 18-Month Old Exclusively Breastfed Children in Thailand: A Cross-Sectional Study Objective: The objective of this study was to investigate the early childhood caries (ECC) status and its risk factors in 9- to 18-month-old exclusively breastfed children in Thailand. Methods: Generally healthy 9- to 18-month-old children who had been exclusively breastfed were recruited. Information on children’s oral hygiene practices and breastfeeding behaviors was collected through parental interviews using a questionnaire. Children’s oral health status was assessed following the WHO caries diagnostic criteria, modified to record the noncavitated lesions. Multivariate logistic regression analysis was adopted to investigate its association with feeding and oral hygiene practices. Results: In total, 513 mother and child dyads (47% boys) were recruited. The prevalence of ECC was 42.5%. The mean (SD) d1mft and d1mfs scores (d1 included noncavitated and cavitated carious teeth/tooth surfaces) were 1.1 (1.4) and 1.3 (2.0), respectively. Multivariate logistic regression analysis revealed that older children with higher plaque scores (OR = 75.60; 95% CI: 40.19–142.20) who were breastfed to sleep (OR = 2.85; 95% CI: 1.48–5.49) and never had their teeth cleaned (OR = 8.51; 95% CI: 1.53–47.14), had a significantly higher chance of having ECC (p < 0.05). Conclusion: Prevalence of ECC is high among exclusively breastfed children aged 9–18 months in Thailand. ECC prevalence is significantly associated with the age of children, the level of dental plaque, breastfeeding to sleep, and oral cleaning. Among all factors, the level of dental plaque is the most significant factor associated with ECC among breastfed children. Introduction Early childhood caries (ECC) is a global oral health concern [1]. It remains prevalent in many countries and highly prevalent in developing countries worldwide [2]. In Southeast Asia, a systematic review showed that ECC prevalence was very high among children aged five years old [3]. In Thailand, the national oral health survey reported similar findings, indicating that the disease was prevalent (79%) among preschool children [3]. In many places, ECC is mostly left unrestored or untreated, possibly leading to dental infection and toothache. This eventually affects the quality of life and well-being of young children [4]. Following the American Academy of Pediatric Dentistry, ECC is defined as "the presence of one or more decayed (noncavitated or cavitated lesions), missing (due to caries) or filled tooth surfaces in any primary tooth in a child under the age of six" [5]. The etiology of ECC is complex. Fluoride exposure, sugar consumption, and infant feeding practices are significantly associated with ECC [6]. Breastfeeding is a vital and natural behavior for the growth and development of infants and young children. The American Academy of Pediatrics recommends that infants should be exclusively breastfed for the first six months, with the continuity of breastfeeding alongside complementary diets for another year or longer [7]. Human breast milk feeding is defined as the practice of an infant only being breastfed or fed human breast milk from a bottle. Breastfeeding is a crucial strategy to reduce infant mortality because it provides essential nutrients for growth and development and helps boost the infant's immune system [8]. 
Several studies have shown that breastfeeding reduces the risk of numerous gastrointestinal and respiratory tract infections, atopic eczema, and other allergic disorders [9]. At present, breastfeeding is encouraged and promoted in many countries worldwide [10]. In Thailand, the Ministry of Public Health is aiming to increase the rate of exclusive breastfeeding to at least 50% by 2025 [11]. A Cochrane review concluded that children exclusively breastfed for six months experienced less morbidity, but no risk reduction in dental caries was reported [12]. Following the WHO recommendation, breastfeeding should continue until two years of age or beyond. ECC prevention should align with international initiatives and be integrated into contemporary primary care systems [13]. However, the evidence on the association between breastfeeding and ECC is currently inconsistent [6,14]. Some studies showed that prolonged breastfeeding was associated with ECC [14,15]. In contrast, another study reported no relationship between prolonged breastfeeding and dental caries in young children [16]. Recently, a systematic review concluded that breastfeeding until two years of age did not increase caries risk [6]. The cause of ECC is known to be multifactorial, including biological, behavioral, social, and environmental circumstances [17], and improper feeding patterns may pose an increased risk of developing ECC [18]. Most studies have focused on the duration of breastfeeding and ECC. To date, there is limited information regarding the level of dental plaque and other modifiable risk factors in relation to ECC among exclusively breastfed children with high caries risk. We hypothesized that breastfeeding patterns and oral health-related behaviors may be associated with ECC development and severity in this population. Thus, this study aimed to investigate ECC prevalence, caries experience, and the intensity of ECC and its risk factors among 9- to 18-month-old exclusively breastfed children. The results of the present study can help health care practitioners provide preventive guidance for parents and caregivers of exclusively breastfed children. Materials and Methods The Institutional Review Board of Chulalongkorn University (IRB no.: HREC-DCU 2011-004) approved the present study. Written consent was obtained from the parent of each study child. The current study was implemented in full accordance with the World Medical Association Declaration of Helsinki. The study was conducted at Queen Sirikit National Institute of Child Health, Bangkok, Thailand, from October 2011 to September 2012. Eligibility criteria were 9- to 18-month-old children who had been exclusively breastfed from birth to 6 months of age and fully breastfed (breast milk with or without water, other liquids, or food, but not formula) until the day of examination, and whose mothers were able to read and write in the Thai language. All children who had attended the Well Baby Program in the pediatric clinic for routine vaccination were invited. Exclusion criteria were children who had a major systemic illness. The participating children were examined in the consultation room at the pediatric clinic. Regarding the sample size estimation, the ECC prevalence of Thai children aged 12 and 18 months was 22.8% and 66.8%, respectively [19]. In the present study, we recruited children aged 9-18 months; thus, the overall anticipated prevalence would be around 50%. The desired precision of estimation was set at 5%.
With the confidence interval set at 95% (alpha = 0.05), 386 children were needed in this study. With the anticipated response rate of 70%, at least 550 dyads needed to be invited. Mothers of the study children were interviewed by an independent interviewer using a structured questionnaire covering the child's demographic background, breastfeeding behaviors, oral health-related behaviors, dietary practices, and medication intake. Before conducting the study, an examiner (P.C.) was trained and calibrated against a specialist in pediatric dentistry (C.T.). The kappa statistic from the calibration process was 0.9. The study children were examined by the calibrated and trained dentist (P.C.), who was not aware of the mothers' responses in the interviews. The dental examination was performed in a knee-to-knee position using a dental probe and a dental mirror. Caries was diagnosed following the WHO diagnostic criteria [20], modified to record noncavitated (initial) lesions according to Warren and colleagues [21]. In the present study, the numbers of decayed (noncavitated or cavitated), missing due to caries, and filled teeth (d1mft) and tooth surfaces (d1mfs) were calculated for each participating child. Regarding the assessment of dental plaque, the four anterior maxillary teeth were scored using the Greene and Vermillion index [22]. A trained examiner used a blunt probe to horizontally scrape the tooth surface at the incisal third, middle third, and cervical third of the anterior maxillary teeth. Then, the amount of plaque on the explorer and the location of plaque accumulation were visually observed. The criteria used in the study were as follows: level 3 = plaque accumulation up to the incisal third; level 2 = plaque accumulation up to the middle third; level 1 = plaque accumulation on the cervical third; level 0 = no dental plaque. The scores of the upper maxillary anterior teeth were summed and then divided by the total number of teeth examined. We analyzed the data using SPSS 24.0 for Windows (IBM Corp., Armonk, NY, USA). The prevalence of ECC, d1mft, d1mfs, and the intensity of ECC (I-ECC) were calculated. The I-ECC was calculated by dividing the d1mft score by the number of erupted teeth. The Shapiro-Wilk test was adopted to test the normality of the d1mfs scores and I-ECC. Because the data were not normally distributed (Shapiro-Wilk test, p < 0.05), the Mann-Whitney U test was used to study the distribution of d1mfs scores and I-ECC according to the children's age. Logistic regression was adopted to analyze the relationship between each independent variable and the dependent variable, the presence of ECC (yes or no). Negative binomial regression was used to analyze the association between variables and d1mfs scores. Poisson regression was used to identify factors that correlated with I-ECC. All variables significant in the bivariate analysis (p < 0.05) were entered as covariates in the multivariate regression model. The backward stepwise procedure was adopted to remove nonsignificant variables (p > 0.05) from the model. The final regression model comprised the statistically significant variables. The level of statistical significance in the present study was set at 0.05 for all tests.
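As a brief illustration of the arithmetic behind this analysis, the short Python sketch below reproduces the sample-size calculation (assuming the standard normal-approximation formula n = z^2 p(1 - p)/d^2, which matches the figures reported above) and shows how odds ratios with 95% confidence intervals of the kind reported later in Table 4 are obtained from logistic-regression coefficients (OR = exp(beta), CI = exp(beta +/- 1.96 x SE)). The study's analysis was performed in SPSS; the variable names and simulated data below are purely hypothetical placeholders.

# Illustrative sketch only; not the authors' analysis script.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from math import ceil
from scipy.stats import norm

# 1) Sample size for estimating a prevalence (normal approximation):
#    n = z^2 * p * (1 - p) / d^2, then inflated for the expected response rate.
z = norm.ppf(0.975)                  # ~1.96 for a 95% confidence interval
p, d = 0.50, 0.05                    # anticipated prevalence and desired precision
n = ceil(z**2 * p * (1 - p) / d**2)  # ~385-386 children
n_invited = ceil(n / 0.70)           # ~550 dyads at a 70% anticipated response rate
print(n, n_invited)

# 2) Odds ratios and 95% CIs from a multivariable logistic regression.
#    Simulated data stand in for the survey; binary predictors are coded 0/1.
rng = np.random.default_rng(0)
m = 513
df = pd.DataFrame({
    "age_13_18":   rng.integers(0, 2, m),   # 1 = 13-18 months old (hypothetical coding)
    "plaque_high": rng.integers(0, 2, m),   # 1 = plaque beyond the middle third
    "bf_sleep":    rng.integers(0, 2, m),   # 1 = breastfed to sleep
    "no_cleaning": rng.integers(0, 2, m),   # 1 = teeth never cleaned
})
logit_p = (-2.0 + 1.5 * df.plaque_high + 1.0 * df.bf_sleep
           + 0.8 * df.age_13_18 + 1.2 * df.no_cleaning)
df["ecc"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

fit = smf.logit("ecc ~ age_13_18 + plaque_high + bf_sleep + no_cleaning", data=df).fit()
odds_ratios = np.exp(fit.params)     # OR = exp(beta)
ci = np.exp(fit.conf_int())          # 95% CI = exp(beta +/- 1.96 * SE)
print(pd.concat([odds_ratios, ci], axis=1))

With the values used in the study, the first step reproduces the 386 children and roughly 550 invited dyads described above; the regression step is only a template showing how the reported odds ratios and confidence intervals relate to the fitted coefficients.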
Results Out of the 560 eligible dyads, 513 children (47% males and 53% females) who had been fed only breast milk for at least six months participated in the study. The children's mean age was 13.6 months. Two hundred and eighty-four children were 9-12 months old, whereas 229 children were 13-18 months old. The prevalence of ECC (including either noncavitated or cavitated lesions) was 42.5% (Table 1). The mean (SD) d1mft and d1mfs scores were 1.07 (1.41) and 1.34 (1.99), respectively. The mean d1mft and d1mfs scores and I-ECC according to the children's age are shown in Table 2. The older children (13-18 months old) had significantly higher d1mft and d1mfs scores and I-ECC compared to the younger children (9-12 months old) (Wilcoxon rank-sum test, p < 0.001). Bivariate analyses of potential variables related to the prevalence of ECC, caries experience (d1mfs score), and I-ECC are displayed in Table 3. The significant factors associated with higher ECC prevalence were as follows: higher dental plaque scores, breastfeeding to sleep, ad-lib feeding, no oral cleaning, cleaning less than twice a day, starting oral cleaning after six months of age, and not using fluoride toothpaste (logistic regression, p < 0.05). Similarly, a higher level of dental plaque, breastfeeding to sleep, no oral cleaning, cleaning less than twice a day, and not using fluoride toothpaste were significantly associated with higher d1mfs scores (negative binomial regression, p < 0.05). Three variables (a higher plaque score, breastfeeding to sleep, and no oral cleaning) were significantly associated with I-ECC (Poisson regression, p < 0.05). Table 4 shows the final model of significant factors related to ECC prevalence, caries experience (d1mfs), and I-ECC. After adjusting for potential confounding factors, children's age, breastfeeding to sleep, oral cleaning, and dental plaque were significantly associated with ECC prevalence (p < 0.05). Children with dental plaque covering more than the middle third of their tooth surfaces were 75.60 times as likely to have ECC (95% CI: 40.19-142.20, p < 0.001) as those with less dental plaque. Children who were breastfed to sleep were 2.85 times as likely to develop ECC (95% CI: 1.48-5.49, p = 0.002) as those without this behavior. In addition, older children and children who received no oral cleaning had a higher chance of having ECC (p < 0.05). Regarding the risk factors related to caries experience (d1mfs), the four significant variables were level of dental plaque, oral cleaning practice, breastfeeding to sleep, and children's age (p < 0.05), whereas the level of dental plaque was the only significant variable associated with I-ECC (p < 0.001). Plaque index = modified Greene and Vermillion index; OR = odds ratio; IRR = incidence rate ratio; 95% CI = 95% confidence interval. Discussion ECC was reported to be very high in Southeast Asia [3]. So far, risk factors of ECC in exclusively breastfed children have not been well documented in developing countries in this region, compared with those in developed countries, where living conditions and child-rearing practices are considerably different. Up to now, few studies have been conducted on exclusively breastfed children in Thailand. Among children with various feeding practices, a previous study reported that ECC prevalence was very high in Thailand: 2% in 9-month-olds, 22.8% in 12-month-olds, and 68.1% in 18-month-olds [19]. Similarly, our study found that the ECC prevalence of breastfed children was high (42.5%) and escalated with age. Several behavioral risk factors, such as oral cleaning and feeding practices, were found to be significant.
Among all the potential risk factors studied, dental plaque accumulation was the key risk factor for ECC with regard to the prevalence, magnitude, and intensity of the disease. Children who had dental plaque covering over the middle third of the total anterior tooth area had a significantly higher risk of developing ECC (75 times) than those who had less dental plaque. These findings are in agreement with the care pathways for managing caries in young children [23]. An individualized caries risk assessment, based on information regarding the amount of dental plaque, dental hygiene, and feeding practices, should be developed on top of existing programs such as vaccination programs to prevent ECC in these high caries risk children. Our results showed that children who did not have their teeth cleaned had a significantly higher caries risk (8.5 times), compared with those who did. This finding is in line with previous studies supporting the importance of cleaning the erupting teeth and soft tissues of infants [24,25], possibly allowing children to become accustomed to oral cleaning practices. Interestingly, although recent evidence demonstrated the effectiveness of fluoride toothpaste in decreasing the caries increment in primary teeth [26], the present study showed no correlation between ECC and the use of fluorides in this very young population. The reason might be that most of the previous studies were done in older age groups than the population we studied. The effectiveness of fluoride toothpaste in infants needs to be further studied. Another systematic review also demonstrated that the frequency of tooth brushing was associated with caries incidence or increment [27]. On the contrary, the frequency of daily oral cleaning (once or less vs. twice or more) was not associated with caries status in the study population. These conflicting results may be due to a unique characteristic of caries development in very young children. Possibly, a reported higher frequency of oral hygiene practices may not truly reflect a higher efficiency of plaque removal. The quality of oral cleaning, as assessed by observing dental plaque, would be a more valid risk indicator of ECC. These findings concur with a previous study indicating that the self-reported frequency and method of oral cleaning practices by caregivers did not determine cleanliness, nor did they correlate with caries development [28]. Inappropriate nursing behavior is another risk factor for dental caries development. Several studies have demonstrated that a child falling asleep while suckling milk, under various feeding practices, had an increased risk of ECC development [15,29]. These findings are in accordance with the present study: the study children who were breastfed until asleep were at a higher risk of caries than those who were not. During the night, the salivary flow rate typically slows down, reducing the clearance of milk residue and eventually facilitating caries initiation. Based on the results of the present study, a higher plaque score posed the greatest risk among all factors studied (Table 4). This implies that dental plaque plays a major role, whereas the reported oral health-related behaviors may be a secondary concern. Notably, ECC prevalence increased markedly with age, month by month. Screening and identifying children, in particular those with high caries risk, at a very young age may be crucial to prevent new caries and reduce the intensity of the disease.
Our findings showed that most of the lesions in infants (9-12 months old) were categorized as noncavitated caries, which can be reversed or halted by brushing with fluoride toothpaste and modifying feeding practices. Minimally invasive interventions such as sodium fluoride varnish and silver diamine fluoride should be adopted to control ECC at the early stage [30]. However, in Thailand and other places, parents are unlikely to bring their baby or young child to see a dentist. Primary health care providers may play a vital role in promoting infants' and toddlers' oral health [31]. They can perform dental screenings by merely lifting a child's upper lip to determine the presence of dental plaque on the upper anterior teeth and assess the caries risk of children during the first year of life. Nevertheless, further study is required to assess the benefit of routine oral screenings and, if necessary, noninvasive interventions performed by primary health care providers. The present study had some strengths, such as a sufficient sample size. Information on potential caries risk factors, including feeding patterns and oral hygiene practices, was comprehensively collected to control for confounding factors. Recall bias regarding the feeding and oral hygiene practices at the time that children were 9-18 months old would be lower, compared to that of ECC studies conducted in children aged 3-5 years. However, the study had some inherent limitations. Duplicate examinations for assessing the reliability of the examiner were impractical in this study setting. Some potential confounders that may affect the outcomes, such as family income, parents' educational level, and levels of Streptococcus mutans and lactobacilli, were not included in the analysis. Because nonprobability sampling was used, sampling bias could occur. Caution is warranted in interpreting these results to make inferences about general populations. The cross-sectional design, which evaluated ECC prevalence at one point in time, may hinder inference of a causal relationship between ECC and child-rearing practices. A well-designed cohort study adopting a probability sampling method is required to confirm the effect of feeding habits and oral cleaning behaviors on ECC among exclusively breastfed children from birth to preschool age. Despite the unfavorable oral health outcomes reported in the present study, the benefits of breastfeeding are unparalleled. We affirm that breastfeeding should not be discouraged. Instead, key determinants of ECC among breastfed children should be further investigated, and effective interdisciplinary preventive measures should be implemented in the early childhood stage. Conclusions The prevalence of ECC is high among exclusively breastfed children aged 9-18 months in Thailand. Their ECC prevalence and caries experience are significantly associated with the level of dental plaque, children's age, breastfeeding to sleep, and oral cleaning practices, whereas the I-ECC is only associated with the level of dental plaque.
2020-05-07T09:15:16.912Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "c14abfbd580c0e88ec3fbbc15a90ab0787bf8bb1", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/ijerph17093194", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "462d71ee7a8c7e786962683aaa0be86d61b9c672", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }